Publications

  • Konishi, M., Fujita, M., Takeuchi, Y., Kubo, K., Imano, N., Nishibuchi, I., Murakami, Y., Shimabukuro, K., Wongratwanich, P., Verdonschot, R. G., Kakimoto, N., & Nagata, Y. (2021). Treatment outcomes of real-time intraoral sonography-guided implantation technique of 198Au grain brachytherapy for T1 and T2 tongue cancer. Journal of Radiation Research, 62(5), 871-876. doi:10.1093/jrr/rrab059.

    Abstract

    It is often challenging to determine the accurate size and shape of oral lesions through computed tomography (CT) or magnetic resonance imaging (MRI) when they are very small or obscured by metallic artifacts, such as dental prostheses. Intraoral ultrasonography (IUS) has been shown to be beneficial in obtaining precise information about total tumor extension, as well as the exact location and guiding the insertion of catheters during interstitial brachytherapy. We evaluated the role of IUS in assessing the clinical outcomes of interstitial brachytherapy with 198Au grains in tongue cancer through a retrospective medical chart review. The data from 45 patients with T1 (n = 21) and T2 (n = 24) tongue cancer, who were mainly treated with 198Au grain implants between January 2005 and April 2019, were included in this study. 198Au grain implantations were carried out, and positioning of the implants was confirmed by IUS, to ensure that 198Au grains were appropriately placed for the deep border of the tongue lesion. The five-year local control rates of T1 and T2 tongue cancers were 95.2% and 95.5%, respectively. We propose that the use of IUS to identify the extent of lesions and the position of implanted grains is effective when performing brachytherapy with 198Au grains.
  • Konopka, A. E. (2012). Planning ahead: How recent experience with structures and words changes the scope of linguistic planning. Journal of Memory and Language, 66, 143-162. doi:10.1016/j.jml.2011.08.003.

    Abstract

    The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation.
  • Korecky-Kröll, K., Libben, G., Stempfer, N., Wiesinger, J., Reinisch, E., Bertl, J., & Dressler, W. U. (2012). Helping a crocodile to learn German plurals: Children’s online judgment of actual, potential and illegal plural forms. Morphology, 22, 35-65. doi:10.1007/s11525-011-9191-8.

    Abstract

    A substantial tradition of linguistic inquiry has framed the knowledge of native speakers in terms of their ability to determine the grammatical acceptability of language forms that they encounter for the first time. In the domain of morphology, the productivity framework of Dressler (CLASNET Working papers 7, 1997) has emphasized the importance of this ability in terms of the graded potentiality of non-existing multimorphemic forms. The goal of this study was to investigate what role the notion of potentiality plays in online lexical well-formedness judgment among children who are native speakers of Austrian German. A total of 114 children between the ages of six and ten and a total of 40 adults between the ages of 18 and 30 (as a comparison group) participated in an online well-formedness judgment task which focused on pluralized German nouns. Concrete, picturable, high frequency German nouns were presented in three pluralized forms: (a) actual existing plural form, (b) morphologically illegal plural form, (c) potential (but not existing) plural form. Participants were shown pictures of the nouns (as a set of three identical items) and simultaneously heard one of three pluralized forms for each noun. Response latency and judgment type served as dependent variables. Results indicate that both children and adults are sensitive to the distinction between illegal and potential forms (neither of which they would have encountered). For all participants, plural frequency (rather than frequency of the singular form) affected responses for both existing and non-existing words. Other factors increasing acceptability were the presence of supplementary umlaut in addition to suffixation and homophony with existing words or word forms.
  • Kos, M., Van den Brink, D., Snijders, T. M., Rijpkema, M., Franke, B., Fernandez, G., Hagoort, P., & Whitehouse, A. (2012). CNTNAP2 and language processing in healthy individuals as measured with ERPs. PLoS One, 7(10), e46995. doi:10.1371/journal.pone.0046995.

    Abstract

    The genetic FOXP2-CNTNAP2 pathway has been shown to be involved in the language capacity. We investigated whether a common variant of CNTNAP2 (rs7794745) is relevant for syntactic and semantic processing in the general population by using a visual sentence processing paradigm while recording ERPs in 49 healthy adults. While both AA homozygotes and T-carriers showed a standard N400 effect to semantic anomalies, the response to subject-verb agreement violations differed across genotype groups. T-carriers displayed an anterior negativity preceding the P600 effect, whereas for the AA group only a P600 effect was observed. These results provide another piece of evidence that the neuronal architecture of the human faculty of language is shaped differently by effects that are genetically determined.
  • Kos, M., Van den Brink, D., & Hagoort, P. (2012). Individual variation in the late positive complex to semantic anomalies. Frontiers in Psychology, 3, 318. doi:10.3389/fpsyg.2012.00318.

    Abstract

    It is well-known that, within ERP paradigms of sentence processing, semantically anomalous words elicit N400 effects. Less clear, however, is what happens after the N400. In some cases N400 effects are followed by Late Positive Complexes (LPC), whereas in other cases such effects are lacking. We investigated several factors which could affect the LPC, such as contextual constraint, inter-individual variation and working memory. Seventy-two participants read sentences containing a semantic manipulation (Whipped cream tastes sweet/anxious and creamy). Neither contextual constraint nor working memory correlated with the LPC. Inter-individual variation played a substantial role in the elicitation of the LPC with about half of the participants showing a negative response and the other half showing an LPC. This individual variation correlated with a syntactic ERP as well as an alternative semantic manipulation. In conclusion, our results show that inter-individual variation plays a large role in the elicitation of the LPC and this may account for the diversity in LPC findings in language research.
  • Kösem, A., & van Wassenhove, V. (2012). Temporal structure in audiovisual sensory selection. PLoS One, 7(7), e40936. doi:10.1371/journal.pone.0040936.

    Abstract

    In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to which extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.
  • Kreuzer, H. (Ed.). (1971). Methodische Perspektiven [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (1/2).
  • Kurt, S., Fisher, S. E., & Ehret, G. (2012). Foxp2 mutations impair auditory-motor-association learning. PLoS One, 7(3), e33130. doi:10.1371/journal.pone.0033130.

    Abstract

    Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.
  • Lai, V. T., Hagoort, P., & Casasanto, D. (2012). Affective primacy vs. cognitive primacy: Dissolving the debate. Frontiers in Psychology, 3, 243. doi:10.3389/fpsyg.2012.00243.

    Abstract

    When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur.
  • Lattenkamp, E. Z., Hörpel, S. G., Mengede, J., & Firzlaff, U. (2021). A researcher’s guide to the comparison of vocal production learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200237. doi:10.1098/rstb.2020.0237.

    Abstract

    Vocal production learning (VPL) is the capacity to learn to produce new vocalizations, which is a rare ability in the animal kingdom and thus far has only been identified in a handful of mammalian taxa and three groups of birds. Over the last few decades, approaches to the demonstration of VPL have varied among taxa, sound production systems and functions. These discrepancies strongly impede direct comparisons between studies. In the light of the growing number of experimental studies reporting VPL, the need for comparability is becoming more and more pressing. The comparative evaluation of VPL across studies would be facilitated by unified and generalized reporting standards, which would allow a better positioning of species on any proposed VPL continuum. In this paper, we specifically highlight five factors influencing the comparability of VPL assessments: (i) comparison to an acoustic baseline, (ii) comprehensive reporting of acoustic parameters, (iii) extended reporting of training conditions and durations, (iv) investigating VPL function via behavioural, perception-based experiments and (v) validation of findings on a neuronal level. These guidelines emphasize the importance of comparability between studies in order to unify the field of vocal learning.
  • Lattenkamp, E. Z., Linnenschmidt, M., Mardus, E., Vernes, S. C., Wiegrebe, L., & Schutte, M. (2021). The vocal development of the pale spear-nosed bat is dependent on auditory feedback. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200253. doi:10.1098/rstb.2020.0253.

    Abstract

    Human vocal development and speech learning require acoustic feedback, and humans who are born deaf do not acquire a normal adult speech capacity. Most other mammals display a largely innate vocal repertoire. Like humans, bats are thought to be one of the few taxa capable of vocal learning as they can acquire new vocalizations by modifying vocalizations according to auditory experiences. We investigated the effect of acoustic deafening on the vocal development of the pale spear-nosed bat. Three juvenile pale spear-nosed bats were deafened, and their vocal development was studied in comparison with an age-matched, hearing control group. The results show that during development the deafened bats increased their vocal activity, and their vocalizations were substantially altered, being much shorter, higher in pitch, and more aperiodic than the vocalizations of the control animals. The pale spear-nosed bat relies on auditory feedback for vocal development and, in the absence of auditory input, species-atypical vocalizations are acquired. This work serves as a basis for further research using the pale spear-nosed bat as a mammalian model for vocal learning, and contributes to comparative studies on hearing impairment across species. This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Lattenkamp, E. Z., Nagy, M., Drexl, M., Vernes, S. C., Wiegrebe, L., & Knörnschild, M. (2021). Hearing sensitivity and amplitude coding in bats are differentially shaped by echolocation calls and social calls. Proceedings of the Royal Society B: Biological Sciences, 288(1942): 20202600. doi:10.1098/rspb.2020.2600.

    Abstract

    Differences in auditory perception between species are influenced by phylogenetic origin and the perceptual challenges imposed by the natural environment, such as detecting prey- or predator-generated sounds and communication signals. Bats are well suited for comparative studies on auditory perception since they predominantly rely on echolocation to perceive the world, while their social calls and most environmental sounds have low frequencies. We tested if hearing sensitivity and stimulus level coding in bats differ between high and low-frequency ranges by measuring auditory brainstem responses (ABRs) of 86 bats belonging to 11 species. In most species, auditory sensitivity was equally good at both high- and low-frequency ranges, while amplitude was more finely coded for higher frequency ranges. Additionally, we conducted a phylogenetic comparative analysis by combining our ABR data with published data on 27 species. Species-specific peaks in hearing sensitivity correlated with peak frequencies of echolocation calls and pup isolation calls, suggesting that changes in hearing sensitivity evolved in response to frequency changes of echolocation and social calls. Overall, our study provides the most comprehensive comparative assessment of bat hearing capacities to date and highlights the evolutionary pressures acting on their sensory perception.
  • Law, R., & Pylkkänen, L. (2021). Lists with and without syntax: A new approach to measuring the neural processing of syntax. The Journal of Neuroscience, 41(10), 2186-2196. doi:10.1523/JNEUROSCI.1179-20.2021.

    Abstract

    In the neurobiology of language, a fundamental challenge is deconfounding syntax from semantics. Changes in syntactic structure usually correlate with changes in meaning. We approached this challenge from a new angle. We deployed word lists, which are usually the unstructured control in studies of syntax, as both the test and the control stimulus. Three-noun lists (lamps, dolls, guitars) were embedded in sentences (The eccentric man hoarded lamps, dolls, guitars…) and in longer lists (forks, pen, toilet, rodeo, graves, drums, mulch, lamps, dolls, guitars…). This allowed us to perfectly control both lexical characteristics and local combinatorics: the same words occurred in both conditions and in neither case did the list items locally compose into phrases (e.g. ‘lamps’ and ‘dolls’ do not form a phrase). But in one case, the list partakes in a syntactic tree, while in the other, it does not. Being embedded inside a syntactic tree increased source-localized MEG activity at ~250-300ms from word onset in the left inferior frontal cortex, at ~300-350ms in the left anterior temporal lobe and, most reliably, at ~330-400ms in left posterior temporal cortex. In contrast, effects of semantic association strength, which we also varied, localized in left temporo-parietal cortex, with high associations increasing activity at around 400ms. This dissociation offers a novel characterization of the structure vs. meaning contrast in the brain: The fronto-temporal network that is familiar from studies of sentence processing can be driven by the sheer presence of global sentence structure, while associative semantics has a more posterior neural signature.
  • Lehtonen, M., Hulten, A., Rodríguez-Fornells, A., Cunillera, T., Tuomainen, J., & Laine, M. (2012). Differences in word recognition between early bilinguals and monolinguals: Behavioral and ERP evidence. Neuropsychologia, 50, 1362-1371. doi:10.1016/j.neuropsychologia.2012.02.021.

    Abstract

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies also when compared to corresponding monolinguals. In ERPs, language processing differences between bilinguals vs. monolinguals have typically been found in the N400 component. In the present study, highly proficient Finnish-Swedish bilinguals who had acquired both languages during childhood were compared to Finnish monolinguals during a visual lexical decision task and simultaneous ERP recordings. Behaviorally, we found that the response latencies were overall longer in bilinguals than monolinguals, and that the effects for all three factors, frequency, morphology, and lexicality, were also larger in bilinguals even though they had acquired both languages early and were highly proficient in them. In line with this, the N400 effects induced by frequency, morphology, and lexicality were larger for bilinguals than monolinguals. Furthermore, the ERP results also suggest that while most inflected Finnish words are decomposed into stem and suffix, only monolinguals have encountered high frequency inflected word forms often enough to develop full-form representations for them. The larger behavioral and neural effects in bilinguals for these factors likely reflect a lower amount of exposure to words compared to monolinguals, as the language input of bilinguals is divided between two languages.
  • Lemen, H., Lieven, E., & Theakston, A. (2021). A comparison of the pragmatic patterns in the spontaneous because- and if-sentences produced by children and their caregivers. Journal of Pragmatics, 185, 15-34. doi:10.1016/j.pragma.2021.07.016.

    Abstract

    Findings from corpus (e.g. Diessel, 2004) and comprehension (e.g. De Ruiter et al., 2018) studies show that children produce the adverbial connectives because and if long before they seem able to understand them. However, although children's comprehension is typically tested on sentences expressing the pragmatic relationship which Sweetser (1990) calls “Content”, children also hear and produce sentences expressing “Speech–Act” relationships (e.g. De Ruiter et al., 2021; Kyratzis et al., 1990). To better understand the possible influence of pragmatic variation on 2- to 4- year-old children's acquisition of these connectives, we coded the because and if Speech–Act sentences of 14 British English-speaking mother-child dyads for the type of illocutionary act they contained, as well as the phrasing following the connective. Analyses revealed that children's because Speech–Act sentences were primarily explanations of Statements/Claims, while their if Speech–Act sentences typically related to permission and politeness. While children's because-sentences showed a great deal of individuality, their if-sentences closely resembled their mothers’, containing a high proportion of recurring phrases which appear to be abstracted from input. We discuss how these patterns might help shape children's understanding of each connective and contribute to the children's overall difficulty with because and if.
  • Lemhöfer, K., & Broersma, M. (2012). Introducing LexTALE: A quick and valid Lexical Test for Advanced Learners of English. Behavior Research Methods, 44, 325-343. doi:10.3758/s13428-011-0146-0.

    Abstract

    The increasing number of experimental studies on second language (L2) processing, frequently with English as the L2, calls for a practical and valid measure of English vocabulary knowledge and proficiency. In a large-scale study with Dutch and Korean speakers of L2 English, we tested whether LexTALE, a 5-min vocabulary test, is a valid predictor of English vocabulary knowledge and, possibly, even of general English proficiency. Furthermore, the validity of LexTALE was compared with that of self-ratings of proficiency, a measure frequently used by L2 researchers. The results showed the following in both speaker groups: (1) LexTALE was a good predictor of English vocabulary knowledge; (2) it also correlated substantially with a measure of general English proficiency; and (3) LexTALE was generally superior to self-ratings in its predictions. LexTALE, but not self-ratings, also correlated highly with previous experimental data on two word recognition paradigms. The test can be carried out on or downloaded from www.lextale.com.
  • Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.

    Abstract

    The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this ‘motor’ model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the ‘Visual World’ paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.
  • Lev-Ari, S., & Keysar, B. (2012). Less detailed representation of non-native language: Why non-native speakers’ stories seem more vague. Discourse Processes, 49(7), 523-538. doi:10.1080/0163853X.2012.698493.

    Abstract

    The language of non-native speakers is less reliable than the language of native speakers in conveying the speaker’s intentions. We propose that listeners expect such reduced reliability and that this leads them to adjust the manner in which they process and represent non-native language by representing non-native language in less detail. Experiment 1 shows that when people listen to a story, they are less able to detect a word change with a non-native than with a native speaker. This suggests they represent the language of a non-native speaker with fewer details. Experiment 2 shows that, above a certain threshold, the higher participants’ working memory is, the less they are able to detect the change with a non-native speaker. This suggests that adjustment to non-native speakers depends on working memory. This research has implications for the role of interpersonal expectations in the way people process language.
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104. doi:10.1016/0010-0277(83)90026-4.

    Abstract

    Making a self-repair in speech typically proceeds in three phases. The first phase involves the monitoring of one’s own speech and the interruption of the flow of speech when trouble is detected. From an analysis of 959 spontaneous self-repairs it appears that interrupting follows detection promptly, with the exception that correct words tend to be completed. Another finding is that detection of trouble improves towards the end of constituents. The second phase is characterized by hesitation, pausing, but especially the use of so-called editing terms. Which editing term is used depends on the nature of the speech trouble in a rather regular fashion: Speech errors induce other editing terms than words that are merely inappropriate, and trouble which is detected quickly by the speaker is preferably signalled by the use of ‘uh’. The third phase consists of making the repair proper. The linguistic well-formedness of a repair is not dependent on the speaker’s respecting the integrity of constituents, but on the structural relation between original utterance and repair. A bi-conditional well-formedness rule links this relation to a corresponding relation between the conjuncts of a coordination. It is suggested that a similar relation holds also between question and answer. In all three cases the speaker respects certain structural commitments derived from an original utterance. It was finally shown that the editing term plus the first word of the repair proper almost always contain sufficient information for the listener to decide how the repair should be related to the original utterance. Speakers almost never produce misleading information in this respect. It is argued that speakers have little or no access to their speech production process; self-monitoring is probably based on parsing one’s own inner or overt speech.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M., & Cutler, A. (1983). Prosodic marking in speech repair. Journal of Semantics, 2, 205-217. doi:10.1093/semant/2.2.205.

    Abstract

    Spontaneous self-corrections in speech pose a communication problem; the speaker must make clear to the listener not only that the original utterance was faulty, but where it was faulty and how the fault is to be corrected. Prosodic marking of corrections - making the prosody of the repair noticeably different from that of the original utterance - offers a resource which the speaker can exploit to provide the listener with such information. A corpus of more than 400 spontaneous speech repairs was analysed, and the prosodic characteristics compared with the syntactic and semantic characteristics of each repair. Prosodic marking showed no relationship at all with the syntactic characteristics of repairs. Instead, marking was associated with certain semantic factors: repairs were marked when the original utterance had been actually erroneous, rather than simply less appropriate than the repair; and repairs tended to be marked more often when the set of items encompassing the error and the repair was small rather than when it was large. These findings lend further weight to the characterization of accent as essentially semantic in function.
  • Levelt, W. J. M. (1984). Sprache und Raum. Texten und Schreiben, 20, 18-21.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: “(At) what time do you close?” A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M. (1983). Wetenschapsbeleid: Drie actuele idolen en een godin. Grafiet, 1(4), 178-184.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levinson, S. C. (2012). Authorship: Include all institutes in publishing index [Correspondence]. Nature, 485, 582. doi:10.1038/485582c.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (2012). Kinship and human thought. Science, 336(6084), 988-989. doi:10.1126/science.1222691.

    Abstract

    Language and communication are central to shaping concepts such as kinship categories.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C. (2012). The original sin of cognitive science. Topics in Cognitive Science, 4, 396-403. doi:10.1111/j.1756-8765.2012.01195.x.

    Abstract

    Classical cognitive science was launched on the premise that the architecture of human cognition is uniform and universal across the species. This premise is biologically impossible and is being actively undermined by, for example, imaging genomics. Anthropology (including archaeology, biological anthropology, linguistics, and cultural anthropology) is, in contrast, largely concerned with the diversification of human culture, language, and biology across time and space—it belongs fundamentally to the evolutionary sciences. The new cognitive sciences that will emerge from the interactions with the biological sciences will focus on variation and diversity, opening the door for rapprochement with anthropology.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
  • Levshina, N. (2021). Cross-linguistic trade-offs and causal relationships between cues to grammatical subject and object, and the problem of efficiency-related explanations. Frontiers in Psychology, 12: 648200. doi:10.3389/fpsyg.2021.648200.

    Abstract

    Cross-linguistic studies focus on inverse correlations (trade-offs) between linguistic variables that reflect different cues to linguistic meanings. For example, if a language has no case marking, it is likely to rely on word order as a cue for identification of grammatical roles. Such inverse correlations are interpreted as manifestations of language users’ tendency to use language efficiently. The present study argues that this interpretation is problematic. Linguistic variables, such as the presence of case, or flexibility of word order, are aggregate properties, which do not represent the use of linguistic cues in context directly. Still, such variables can be useful for circumscribing the potential role of communicative efficiency in language evolution, if we move from cross-linguistic trade-offs to multivariate causal networks. This idea is illustrated by a case study of linguistic variables related to four types of Subject and Object cues: case marking, rigid word order of Subject and Object, tight semantics and verb-medial order. The variables are obtained from online language corpora in thirty languages, annotated with the Universal Dependencies. The causal model suggests that the relationships between the variables can be explained predominantly by sociolinguistic factors, leaving little space for a potential impact of efficient linguistic behavior.
  • Levshina, N., & Moran, S. (2021). Efficiency in human languages: Corpus evidence for universal principles. Linguistics Vanguard, 7(s3): 20200081. doi:10.1515/lingvan-2020-0081.

    Abstract

    Over the last few years, there has been a growing interest in communicative efficiency. It has been argued that language users act efficiently, saving effort for processing and articulation, and that language structure and use reflect this tendency. The emergence of new corpus data has brought to life numerous studies on efficient language use in the lexicon, in morphosyntax, and in discourse and phonology in different languages. In this introductory paper, we discuss communicative efficiency in human languages, focusing on evidence of efficient language use found in multilingual corpora. The evidence suggests that efficiency is a universal feature of human language. We provide an overview of different manifestations of efficiency on different levels of language structure, and we discuss the major questions and findings so far, some of which are addressed for the first time in the contributions in this special collection.
  • Levshina, N., & Moran, S. (Eds.). (2021). Efficiency in human languages: Corpus evidence for universal principles [Special Issue]. Linguistics Vanguard, 7(s3).
  • Levshina, N. (2021). Communicative efficiency and differential case marking: A reverse-engineering approach. Linguistics Vanguard, 7(s3): 20190087. doi:10.1515/lingvan-2019-0087.
  • Liebal, K., & Haun, D. B. M. (2012). The importance of comparative psychology for developmental science [Review Article]. International Journal of Developmental Science, 6, 21-23. doi:10.3233/DEV-2012-11088.

    Abstract

    The aim of this essay is to elucidate the relevance of cross-species comparisons for the investigation of human behavior and its development. The focus is on the comparison of human children and another group of primates, the non-human great apes, with special attention to their cognitive skills. Integrating a comparative and developmental perspective, we argue, can provide additional answers to central and elusive questions about human behavior in general and its development in particular: What are the heritable predispositions of the human mind? What cognitive traits are uniquely human? In this sense, Developmental Science would benefit from results of Comparative Psychology.
  • Linkenauger, S. A., Lerner, M. D., Ramenzoni, V. C., & Proffitt, D. R. (2012). A perceptual-motor deficit predicts social and communicative impairments in individuals with autism spectrum disorders. Autism Research, 5, 352-362. doi:10.1002/aur.1248.

    Abstract

    Individuals with autism spectrum disorders (ASDs) have known impairments in social and motor skills. Identifying putative underlying mechanisms of these impairments could lead to improved understanding of the etiology of core social/communicative deficits in ASDs, and identification of novel intervention targets. The ability to perceptually integrate one's physical capacities with one's environment (affordance perception) may be such a mechanism. This ability has been theorized to be impaired in ASDs, but this question has never been directly tested. Crucially, affordance perception has been shown to be amenable to learning; thus, if it is implicated in deficits in ASDs, it may be a valuable unexplored intervention target. The present study compared affordance perception in adolescents and adults with ASDs to typically developing (TD) controls. Two groups of individuals (adolescents and adults) with ASDs and age-matched TD controls completed well-established action capability estimation tasks (reachability, graspability, and aperture passability). Their caregivers completed a measure of their lifetime social/communicative deficits. Compared with controls, individuals with ASDs showed unprecedented gross impairments in relating information about their bodies' action capabilities to visual information specifying the environment. The magnitude of these deficits strongly predicted the magnitude of social/communicative impairments in individuals with ASDs. Thus, social/communicative impairments in ASDs may derive, at least in part, from deficits in basic perceptual–motor processes (e.g. action capability estimation). Such deficits may impair the ability to maintain and calibrate the relationship between oneself and one's social and physical environments, and present a fruitful, novel, and unexplored target for intervention.
  • Liszkowski, U., Brown, P., Callaghan, T., Takada, A., & De Vos, C. (2012). A prelinguistic gestural universal of human communication. Cognitive Science, 36, 698-713. doi:10.1111/j.1551-6709.2011.01228.x.

    Abstract

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10–14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same prototypical morphology of the extended index finger. Infants’ pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers’ and infants’ pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication.
  • Long, M., Moore, I., Mollica, F., & Rubio-Fernandez, P. (2021). Contrast perception as a visual heuristic in the formulation of referential expressions. Cognition, 217: 104879. doi:10.1016/j.cognition.2021.104879.

    Abstract

    We hypothesize that contrast perception works as a visual heuristic, such that when speakers perceive a significant degree of contrast in a visual context, they tend to produce the corresponding adjective to describe a referent. The contrast perception heuristic supports efficient audience design, allowing speakers to produce referential expressions with minimum expenditure of cognitive resources, while facilitating the listener's visual search for the referent. We tested the perceptual contrast hypothesis in three language-production experiments. Experiment 1 revealed that speakers overspecified color adjectives in polychrome displays, whereas in monochrome displays they overspecified other properties that were contrastive. Further support for the contrast perception hypothesis comes from a re-analysis of previous work, which confirmed that color contrast elicits color overspecification when detected in a given display, but not when detected across monochrome trials. Experiment 2 revealed that even atypical colors (which are often overspecified) are only mentioned if there is color contrast. In Experiment 3, participants named a target color faster in monochrome than in polychrome displays, suggesting that the effect of color contrast is not analogous to ease of production. We conclude that the tendency to overspecify color in polychrome displays is not a bottom-up effect driven by the visual salience of color as a property, but possibly a learned communicative strategy. We discuss the implications of our account for pragmatic theories of referential communication and models of audience design, challenging the view that overspecification is a form of egocentric behavior.

    Additional information

    supplementary data
  • Long, M., Shukla, V., & Rubio-Fernandez, P. (2021). The development of simile comprehension: From similarity to scalar implicature. Child Development, 92(4), 1439-1457. doi:10.1111/cdev.13507.

    Abstract

    Similes require two different pragmatic skills: appreciating the intended similarity and deriving a scalar implicature (e.g., “Lucy is like a parrot” normally implies that Lucy is not a parrot), but previous studies overlooked this second skill. In Experiment 1, preschoolers (N = 48; ages 3–5) understood “X is like a Y” as an expression of similarity. In Experiment 2 (N = 99; ages 3–6, 13) and Experiment 3 (N = 201; ages 3–5 and adults), participants received metaphors (“Lucy is a parrot”) or similes (“Lucy is like a parrot”) as clues to select one of three images (a parrot, a girl or a parrot-looking girl). An early developmental trend revealed that 3-year-olds started deriving the implicature “X is not a Y,” whereas 5-year-olds performed like adults.
  • Lopopolo, A., Van den Bosch, A., Petersson, K. M., & Willems, R. M. (2021). Distinguishing syntactic operations in the brain: Dependency and phrase-structure parsing. Neurobiology of Language, 2(1), 152-175. doi:10.1162/nol_a_00029.

    Abstract

    Finding the structure of a sentence — the way its words hold together to convey meaning — is a fundamental step in language comprehension. Several brain regions, including the left inferior frontal gyrus, the left posterior superior temporal gyrus, and the left anterior temporal pole, are supposed to support this operation. The exact role of these areas is nonetheless still debated. In this paper we investigate the hypothesis that different brain regions could be sensitive to different kinds of syntactic computations. We compare the fit of phrase-structure and dependency structure descriptors to activity in brain areas using fMRI. Our results show a division between areas with regard to the type of structure computed, with the left ATP and left IFG favouring dependency structures and left pSTG favouring phrase structures.
  • Lowndes, R., Molz, B., Warriner, L., Herbik, A., De Best, P. B., Raz, N., Gouws, A., Ahmadi, K., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., Morland, A. B., & Baseler, H. A. (2021). Structural differences across multiple visual cortical regions in the absence of cone function in congenital achromatopsia. Frontiers in Neuroscience, 15: 718958. doi:10.3389/fnins.2021.718958.

    Abstract

    Most individuals with congenital achromatopsia (ACHM) carry mutations that affect the retinal phototransduction pathway of cone photoreceptors, fundamental to both high acuity vision and colour perception. As the central fovea is occupied solely by cones, achromats have an absence of retinal input to the visual cortex and a small central area of blindness. Additionally, those with complete ACHM have no colour perception, and colour processing regions of the ventral cortex also lack typical chromatic signals from the cones. This study examined the cortical morphology (grey matter volume, cortical thickness, and cortical surface area) of multiple visual cortical regions in ACHM (n = 15) compared to normally sighted controls (n = 42) to determine the cortical changes that are associated with the retinal characteristics of ACHM. Surface-based morphometry was applied to T1-weighted MRI in atlas-defined early, ventral and dorsal visual regions of interest. Reduced grey matter volume in V1, V2, V3, and V4 was found in ACHM compared to controls, driven by a reduction in cortical surface area as there was no significant reduction in cortical thickness. Cortical surface area (but not thickness) was reduced in a wide range of areas (V1, V2, V3, TO1, V4, and LO1). Reduction in early visual areas with large foveal representations (V1, V2, and V3) suggests that the lack of foveal input to the visual cortex was a major driving factor in morphological changes in ACHM. However, the significant reduction in ventral area V4 coupled with the lack of difference in dorsal areas V3a and V3b suggest that deprivation of chromatic signals to visual cortex in ACHM may also contribute to changes in cortical morphology. This research shows that the congenital lack of cone input to the visual cortex can lead to widespread structural changes across multiple visual areas.

    Additional information

    table S1
  • Ludwig, A., Vernesi, C., Lieckfeldt, D., Lattenkamp, E. Z., Wiethölter, A., & Lutz, W. (2012). Origin and patterns of genetic diversity of German fallow deer as inferred from mitochondrial DNA. European Journal of Wildlife Research, 58(2), 495-501. doi:10.1007/s10344-011-0571-5.

    Abstract

    Although not native to Germany, fallow deer (Dama dama) are commonly found today, but their origin as well as the genetic structure of the founding members is still unclear. In order to address these aspects, we sequenced ~400 bp of the mitochondrial d-loop of 365 animals from 22 locations in nine German Federal States. Nine new haplotypes were detected and archived in GenBank. Our data produced evidence for a Turkish origin of the German founders. However, German fallow deer populations have complex patterns of mtDNA variation. In particular, three distinct clusters were identified: Schleswig-Holstein, Brandenburg/Hesse/Rhineland and Saxony/lower Saxony/Mecklenburg/Westphalia/Anhalt. Signatures of recent demographic expansions were found for the latter two. An overall pattern of reduced genetic variation was therefore accompanied by a relatively strong genetic structure, as highlighted by an overall ΦCT value of 0.74 (P < 0.001).
  • Lum, J. A., & Kidd, E. (2012). An examination of the associations among multiple memory systems, past tense, and vocabulary in typically developing 5-year-old children. Journal of Speech, Language, and Hearing Research, 55(4), 989-1006. doi:10.1044/1092-4388(2011/10-0137).
  • Lutzenberger, H., De Vos, C., Crasborn, O., & Fikkert, P. (2021). Formal variation in the Kata Kolok lexicon. Glossa: a journal of general linguistics, 6. doi:10.16995/glossa.5880.

    Abstract

    Sign language lexicons incorporate phonological specifications. Evidence from emerging sign languages suggests that phonological structure emerges gradually in a new language. In this study, we investigate variation in the form of signs across 20 deaf adult signers of Kata Kolok, a sign language that emerged spontaneously in a Balinese village community. Combining methods previously used for sign comparisons, we introduce a new numeric measure of variation. Our nuanced yet comprehensive approach to form variation integrates three levels (iconic motivation, surface realisation, feature differences) and allows for refinement through weighting the variation score by token and signer frequency. We demonstrate that variation in the form of signs appears in different degrees at different levels. Token frequency in a given dataset greatly affects how much variation can surface, suggesting caution in interpreting previous findings. Different sign variants have different scopes of use among the signing population, with some more widely used than others. Both frequency weightings (token and signer) identify dominant sign variants, i.e., sign forms that are produced frequently or by many signers. We argue that variation does not equal the absence of conventionalisation. Indeed, especially in micro-community sign languages, variation may be key to understanding patterns of language emergence.
  • MacLean, E. L., Matthews, L. J., Hare, B. A., Nunn, C. L., Anderson, R. C., Aureli, F., Brannon, E. M., Call, J., Drea, C. M., Emery, N. J., Haun, D. B. M., Herrmann, E., Jacobs, L. F., Platt, M. L., Rosati, A. G., Sandel, A. A., Schroepfer, K. K., Seed, A. M., Tan, J., Van Schaik, C. P., & Wobber, V. (2012). How does cognition evolve? Phylogenetic comparative psychology. Animal Cognition, 15, 223-238. doi:10.1007/s10071-011-0448-8.

    Abstract

    Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.
  • Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi:10.3389/fpsyg.2012.00376.

    Abstract

    During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor’s turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker’s turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends because they know how it ends. We conducted a gating study to examine if better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn’s end was estimated better in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
  • Majid, A. (2012). Current emotion research in the language sciences. Emotion Review, 4, 432-443. doi:10.1177/1754073912445827.

    Abstract

    When researchers think about the interaction between language and emotion, they typically focus on descriptive emotion words. This review demonstrates that emotion can interact with language at many levels of structure, from the sound patterns of a language to its lexicon and grammar, and beyond to how it appears in conversation and discourse. Findings are considered from diverse subfields across the language sciences, including cognitive linguistics, psycholinguistics, linguistic anthropology, and conversation analysis. Taken together, it is clear that emotional expression is finely tuned to language-specific structures. Future emotion research can better exploit cross-linguistic variation to unravel possible universal principles operating between language and emotion.
  • Majid, A. (2012). The role of language in a science of emotion [Comment]. Emotion review, 4, 380-381. doi:10.1177/1754073912445819.

    Abstract

    Emotion scientists often take an ambivalent stance concerning the role of language in a science of emotion. However, it is important for emotion researchers to contemplate some of the consequences of current practices for their theory building. There is a danger of an overreliance on the English language as a transparent window into emotion categories. More consideration has to be given to cross-linguistic comparison in the future so that models of language acquisition and of the language–cognition interface better fit the extant variation found in today’s peoples.
  • Majid, A., Boroditsky, L., & Gaby, A. (Eds.). (2012). Time in terms of space [Research topic] [Special Issue]. Frontiers in cultural psychology. Retrieved from http://www.frontiersin.org/cultural_psychology/researchtopics/Time_in_terms_of_space/755.

    Abstract

    This Research Topic explores the question: what is the relationship between representations of time and space in cultures around the world? This question touches on the broader issue of how humans come to represent and reason about abstract entities – things we cannot see or touch. Time is a particularly opportune domain to investigate this topic. Across cultures, people use spatial representations for time, for example in graphs, time-lines, clocks, sundials, hourglasses, and calendars. In language, time is also heavily related to space, with spatial terms often used to describe the order and duration of events. In English, for example, we might move a meeting forward, push a deadline back, attend a long concert or go on a short break. People also make consistent spatial gestures when talking about time, and appear to spontaneously invoke spatial representations when processing temporal language. A large body of evidence suggests a close correspondence between temporal and spatial language and thought. However, the ways that people spatialize time can differ dramatically across languages and cultures. This research topic identifies and explores some of the sources of this variation, including patterns in spatial thinking, patterns in metaphor, gesture and other cultural systems. This Research Topic explores how speakers of different languages talk about time and space and how they think about these domains, outside of language. The Research Topic invites papers exploring the following issues: 1. Do the linguistic representations of space and time share the same lexical and morphosyntactic resources? 2. To what extent does the conceptualization of time follow the conceptualization of space?
  • Mak, M., & Willems, R. M. (2021). Eyelit: Eye movement and reader response data during literary reading. Journal of Open Humanities Data, 7: 25. doi:10.5334/johd.49.

    Abstract

    An eye-tracking data set is described of 102 participants reading three Dutch literary short stories each (7790 words in total per participant). The pre-processed data set includes (1) Fixation report, (2) Saccade report, (3) Interest Area report, (4) Trial report (aggregated data for each page), (5) Sample report (sampling rate = 500 Hz), (6) Questionnaire data on reading experiences and participant characteristics, and (7) word characteristics for all words (with the potential of calculating additional word characteristics). It is stored on DANS, and can be used to study word characteristics or literary reading and all facets of eye movements.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.

    Abstract

    Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.

    Additional information

    supplementary materials
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.

    Abstract

    Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2012). Event-related brain potentials index cue-based retrieval interference during sentence comprehension. NeuroImage, 59(2), 1859-1869. doi:10.1016/j.neuroimage.2011.08.057.

    Abstract

    Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a (‘another’). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000 ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
  • Matić, D. (2012). Review of: Assertion by Mark Jary, Palgrave Macmillan, 2010 [Web Post]. The LINGUIST List. Retrieved from http://linguistlist.org/pubs/reviews/get-review.cfm?SubID=4547242.

    Abstract

    Even though assertion has held centre stage in much philosophical and linguistic theorising on language, Mark Jary’s ‘Assertion’ represents the first book-length treatment of the topic. The content of the book is aptly described by the author himself: “This book has two aims. One is to bring together and discuss in a systematic way a range of perspectives on assertion: philosophical, linguistic and psychological. [...] The other is to present a view of the pragmatics of assertion, with particular emphasis on the contribution of the declarative mood to the process of utterance interpretation.” (p. 1). The promise contained in this introductory note is to a large extent fulfilled: the first seven chapters of the book discuss many of the relevant philosophical and linguistic approaches to assertion and at the same time provide the background for the presentation of Jary’s own view on the pragmatics of declaratives, presented in the last (and longest) chapter.
  • McConnell, K., & Blumenthal-Dramé, A. (2021). Usage-Based Individual Differences in the Probabilistic Processing of Multi-Word Sequences. Frontiers in Communication, 6: 703351. doi:10.3389/fcomm.2021.703351.

    Abstract

    While it is widely acknowledged that both predictive expectations and retrodictive integration influence language processing, the individual differences that affect these two processes and the best metrics for observing them have yet to be fully described. The present study aims to contribute to the debate by investigating the extent to which experience-based variables modulate the processing of word pairs (bigrams). Specifically, we investigate how age and reading experience correlate with lexical anticipation and integration, and how this effect can be captured by the metrics of forward and backward transition probability (TP). Participants read more and less strongly associated bigrams, paired to control for known lexical covariates such as bigram frequency and meaning (i.e., absolute control, total control, absolute silence, total silence), in a self-paced reading (SPR) task. They additionally completed assessments of exposure to print text (Author Recognition Test, Shipley vocabulary assessment, Words that Go Together task) and provided their age. Results show that both older age and lesser reading experience individually correlate with stronger TP effects. Moreover, TP effects differ across the spillover region (the two words following the noun in the bigram).
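    The forward and backward TP metrics named in the abstract are standard conditional bigram probabilities: forward TP is P(w2 | w1) and backward TP is P(w1 | w2). A toy computation (with invented counts, not the study's corpus) might look like:

```python
# Illustrative forward/backward transition probability for a bigram.
# The counts below are made-up toy values, only to show the formulas.
from collections import Counter

def transition_probabilities(bigram_counts, w1, w2):
    """Forward TP = P(w2 | w1); backward TP = P(w1 | w2)."""
    pair = bigram_counts[(w1, w2)]
    w1_total = sum(c for (a, _), c in bigram_counts.items() if a == w1)
    w2_total = sum(c for (_, b), c in bigram_counts.items() if b == w2)
    forward = pair / w1_total if w1_total else 0.0
    backward = pair / w2_total if w2_total else 0.0
    return forward, backward

counts = Counter({("absolute", "control"): 8, ("absolute", "silence"): 2,
                  ("total", "control"): 4, ("total", "silence"): 6})
fwd, bwd = transition_probabilities(counts, "absolute", "control")
# forward = 8 / (8 + 2) = 0.8; backward = 8 / (8 + 4) ≈ 0.667
```

    A bigram can thus be strongly predictable forwards but weakly backwards (or vice versa), which is why the two metrics can dissociate anticipation from integration.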
  • McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.

    Abstract

    An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.
  • McQueen, J. M., Tyler, M., & Cutler, A. (2012). Lexical retuning of children’s speech perception: Evidence for knowledge about words’ component sounds. Language Learning and Development, 8, 317-339. doi:10.1080/15475441.2011.641887.

    Abstract

    Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment 1 replicated prior studies on lexically guided retuning of speech perception in adults, with a picture-verification methodology suitable for children. One participant group heard an ambiguous fricative ([s/f]) replacing /f/ (e.g., in words like giraffe); another group heard [s/f] replacing /s/ (e.g., in platypus). The first group subsequently identified more tokens on a Simpie-[s/f]impie-Fimpie toy-name continuum as Fimpie. Experiments 2 and 3 found equivalent lexically guided retuning effects in 12- and 6-year-olds. Children aged 6 have all that is needed for adjusting to talker variation in speech: detailed and abstract phonological representations and the ability to apply them during spoken-word recognition.

  • Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.

    Abstract

    Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band.
  • Melnychuk, T., Galke, L., Seidlmayer, E., Förster, K. U., Tochtermann, K., & Schultz, C. (2021). Früherkennung wissenschaftlicher Konvergenz im Hochschulmanagement. Hochschulmanagement, 16(1), 24-28.

    Abstract

    It is crucial for universities to recognize early signals of scientific convergence. Scientific convergence describes a dynamic pattern where the distance between different fields of knowledge shrinks over time. This knowledge space is beneficial to radical innovations and new promising research topics. Research in converging areas of knowledge can therefore allow universities to establish a leading position in the science community. The Q-AKTIV project develops a new approach on the basis of machine learning to identify scientific convergence at an early stage. In this work, we briefly present this approach and the first results of empirical validation. We discuss the benefits of an instrument building on our approach for the strategic management of universities and other research institutes.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.

    Abstract

    In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
  • Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.

    Abstract

    Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics.
  • Menenti, L., Pickering, M. J., & Garrod, S. C. (2012). Towards a neural basis of interactive alignment in conversation. Frontiers in Human Neuroscience, 6, 185. doi:10.3389/fnhum.2012.00185.

    Abstract

    The interactive-alignment account of dialogue proposes that interlocutors achieve conversational success by aligning their understanding of the situation under discussion. Such alignment occurs because they prime each other at different levels of representation (e.g., phonology, syntax, semantics), and this is possible because these representations are shared across production and comprehension. In this paper, we briefly review the behavioral evidence, and then consider how findings from cognitive neuroscience might lend support to this account, on the assumption that alignment of neural activity corresponds to alignment of mental states. We first review work supporting representational parity between production and comprehension, and suggest that neural activity associated with phonological, lexical, and syntactic aspects of production and comprehension are closely related. We next consider evidence for the neural bases of the activation and use of situation models during production and comprehension, and how these demonstrate the activation of non-linguistic conceptual representations associated with language use. We then review evidence for alignment of neural mechanisms that are specific to the act of communication. Finally, we suggest some avenues of further research that need to be explored to test crucial predictions of the interactive alignment account.
  • Menks, W. M., Fehlbaum, L. V., Borbás, R., Sterzer, P., Stadler, C., & Raschle, N. M. (2021). Eye gaze patterns and functional brain responses during emotional face processing in adolescents with conduct disorder. NeuroImage: Clinical, 29: 102519. doi:10.1016/j.nicl.2020.102519.

    Abstract

    Background: Conduct disorder (CD) is characterized by severe aggressive and antisocial behavior. Initial evidence suggests neural deficits and aberrant eye gaze patterns during emotion processing in CD; both concepts, however, have not yet been studied simultaneously. The present study assessed the functional brain correlates of emotional face processing with and without consideration of concurrent eye gaze behavior in adolescents with CD compared to typically developing (TD) adolescents.
    Methods: 58 adolescents (23 CD/35 TD; average age = 16 years/range = 14–19 years) underwent an implicit emotional face processing task. Neuroimaging analyses were conducted for a priori-defined regions of interest (insula, amygdala, and medial orbitofrontal cortex) and using a full-factorial design assessing the main effects of emotion (neutral, anger, fear), group, and the interaction thereof (cluster-level, p < .05 FWE-corrected), with and without consideration of concurrent eye gaze behavior (i.e., time spent on the eye region).
    Results: Adolescents with CD showed significant hypo-activations during emotional face processing in the right anterior insula compared to TD adolescents, independent of the emotion presented. In-scanner eye-tracking data revealed that adolescents with CD spent significantly less time on the eye, but not the mouth, region. Correcting for eye gaze behavior during emotional face processing reduced the group differences previously observed for the right insula.
    Conclusions: Atypical insula activation during emotional face processing in adolescents with CD may partly be explained by attentional mechanisms (i.e., reduced gaze allocation to the eyes, independent of the emotion presented). An increased understanding of the mechanisms causal for the emotion processing deficits observed in CD may ultimately aid the development of personalized intervention programs.

    Additional information

    1-s2.0-S2213158220303569-mmc1.doc
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Conducting language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Collabra: Psychology, 7(1): 29935. doi:10.1525/collabra.29935.

    Abstract

    Few web-based experiments have explored spoken language production, perhaps due to concerns of data quality, especially for measuring onset latencies. The present study highlights how speech production research can be done outside of the laboratory by measuring utterance durations and speech fluency in a multiple-object naming task when examining two effects related to lexical selection: semantic context and name agreement. A web-based modified blocked-cyclic naming paradigm was created, in which participants named a total of sixteen simultaneously presented pictures on each trial. The pictures were either four tokens from the same semantic category (homogeneous context), or four tokens from different semantic categories (heterogeneous context). Name agreement of the pictures was varied orthogonally (high, low). In addition to onset latency, five dependent variables were measured to index naming performance: accuracy, utterance duration, total pause time, the number of chunks (word groups pronounced without intervening pauses), and first chunk length. Bayesian analyses showed effects of semantic context and name agreement for some of the dependent measures, but no interaction. We discuss the methodological implications of the current study and make best practice recommendations for spoken language production research in an online environment.
  • He, J., Meyer, A. S., & Brehm, L. (2021). Concurrent listening affects speech planning and fluency: The roles of representational similarity and capacity limitation. Language, Cognition and Neuroscience, 36(10), 1258-1280. doi:10.1080/23273798.2021.1925130.

    Abstract

    In a novel continuous speaking-listening paradigm, we explored how speech planning was affected by concurrent listening. In Experiment 1, Dutch speakers named pictures with high versus low name agreement while ignoring Dutch speech, Chinese speech, or eight-talker babble. Both name agreement and type of auditory input influenced response timing and chunking, suggesting that representational similarity impacts lexical selection and the scope of advance planning in utterance generation. In Experiment 2, Dutch speakers named pictures with high or low name agreement while either ignoring Dutch words, or attending to them for a later memory test. Both name agreement and attention demand influenced response timing and chunking, suggesting that attention demand impacts lexical selection and the planned utterance units in each response. The study indicates that representational similarity and attention demand play important roles in linguistic dual-task interference, and the interference can be managed by adapting when and how to plan speech.

    Additional information

    supplemental material
  • Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.

    Abstract

    Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently-used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

    Additional information

    data via Donders Repository
  • Minagawa-Kawai, Y., Cristià, A., & Dupoux, E. (2012). Erratum to “Cerebral lateralization and early speech acquisition: A developmental scenario” [Dev. Cogn. Neurosci. 1 (2011) 217–232]. Developmental Cognitive Neuroscience, 2(1), 194-195. doi:10.1016/j.dcn.2011.07.011.

    Abstract

    Refers to Yasuyo Minagawa-Kawai, Alejandrina Cristià, Emmanuel Dupoux "Cerebral lateralization and early speech acquisition: A developmental scenario" Developmental Cognitive Neuroscience, Volume 1, Issue 3, July 2011, Pages 217-232
  • Misersky, J., Slivac, K., Hagoort, P., & Flecken, M. (2021). The State of the Onion: Grammatical aspect modulates object representation during event comprehension. Cognition, 214: 104744. doi:10.1016/j.cognition.2021.104744.

    Abstract

    The present ERP study assessed whether grammatical aspect is used as a cue in online event comprehension, in particular when reading about events in which an object is visually changed. While perfective aspect cues holistic event representations, including an event's endpoint, progressive aspect highlights intermediate phases of an event. In a 2 × 3 design, participants read SVO sentences describing a change-of-state event (e.g., to chop an onion), with grammatical Aspect manipulated (perfective “chopped” vs progressive “was chopping”). Thereafter, they saw a Picture of an object either having undergone substantial state-change (SC; a chopped onion), no state-change (NSC; an onion in its original state) or an unrelated object (U; a cactus, acting as control condition). Their task was to decide whether the object in the Picture was mentioned in the sentence. We focused on N400 modulation, with ERPs time-locked to picture onset. U pictures elicited an N400 response as expected, suggesting detection of categorical mismatches in object type. For SC and NSC pictures, a whole-head follow-up analysis revealed a P300, implying people were engaged in detailed evaluation of pictures of matching objects. SC pictures received most positive responses overall. Crucially, there was an interaction of Aspect and Picture: SC pictures resulted in a higher amplitude P300 after sentences in the perfective compared to the progressive. Thus, while the perfective cued for a holistic event representation, including the resultant state of the affected object (i.e., the chopped onion) constraining object representations online, the progressive defocused event completion and object-state change. Grammatical aspect thus guided online event comprehension by cueing the visual representation(s) of an object's state.
  • Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.

    Abstract

    We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
  • Misra, S. (2021). Real-time dynamic fur and hair simulation using verlet integration. International Journal of Scientific and Research Publications (IJSRP), 11(2), 444-450. doi:10.29322/IJSRP.11.02.2021.p11053.

    Abstract

    Throughout the history of game development, the physics behind real-time hair simulation has continued to pose a challenge due to the limited availability of the computational resources required by such systems. Unlike rendering an animation, where the requirement of real-time simulation is absent, game hair physics needs more efficiency when it comes to utilization of computational resources. Generally, for making a hair strand mesh, a cylinder or a capsule mesh is an obvious choice despite its requirement of a higher number of draw calls or resources. This paper proposes an innovative and highly efficient use of quad polygons, whose normals face the renderer, in conjunction with Verlet integration, which delivers optimal results by keeping the frames per second (FPS) stable. Additionally, the proposed physics allows for physical forces, such as gravity and wind, to affect hair movement as well as simulate a natural curl in the hair strand.
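The position-based Verlet scheme named in this abstract derives velocity implicitly from the current and previous positions, so each particle update is cheap and stable. The following is a minimal, illustrative sketch of one strand hanging under gravity with distance constraints; the strand length, time step, and constraint-relaxation count are assumptions for the example, not the paper's implementation:

```python
def verlet_step(pos, prev, accel, dt):
    # Position Verlet: x' = 2x - x_prev + a*dt^2, per axis.
    new = [2 * p - q + a * dt * dt for p, q, a in zip(pos, prev, accel)]
    return new, pos

def constrain(a, b, rest):
    # Move both endpoints halfway back toward their rest separation.
    d = [bb - aa for aa, bb in zip(a, b)]
    dist = sum(c * c for c in d) ** 0.5 or 1e-9
    k = (dist - rest) / dist / 2.0
    return ([aa + c * k for aa, c in zip(a, d)],
            [bb - c * k for bb, c in zip(b, d)])

GRAVITY = [0.0, -9.8, 0.0]
DT, REST = 1.0 / 60.0, 0.1            # illustrative step size and segment length
pts = [[0.0, -i * REST, 0.0] for i in range(4)]   # point 0 is pinned at the root
prev = [p[:] for p in pts]

for _ in range(60):                   # simulate one second
    for i in range(1, len(pts)):      # integrate every free point
        pts[i], prev[i] = verlet_step(pts[i], prev[i], GRAVITY, DT)
    for _ in range(4):                # a few constraint-relaxation passes
        for i in range(len(pts) - 1):
            na, nb = constrain(pts[i], pts[i + 1], REST)
            if i > 0:                 # never move the pinned root
                pts[i] = na
            pts[i + 1] = nb
```

Pinning the root and relaxing the distance constraints a few times per frame keeps the strand near-inextensible while remaining cheap enough for real-time use; rendering would then place the camera-facing quads along these points.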
  • Mitterer, H. (Ed.). (2012). Ecological aspects of speech perception [Research topic] [Special Issue]. Frontiers in Cognition.

    Abstract

    Our knowledge of speech perception is largely based on experiments conducted with carefully recorded clear speech presented under good listening conditions to undistracted listeners - a near-ideal situation, in other words. But reality poses a different set of challenges. First of all, listeners may need to divide their attention between speech comprehension and another task (e.g., driving). Outside the laboratory, the speech signal is often slurred by less than careful pronunciation and the listener has to deal with background noise. Moreover, in a globalized world, listeners need to understand speech in more than their native language. Relatedly, the speakers we listen to often have a different language background, so we have to deal with a foreign or regional accent we are not familiar with. Finally, outside the laboratory, speech perception is not an end in itself, but rather a means of contributing to a conversation. Listeners not only need to understand the speech they are hearing, they also need to use this information to plan and time their own responses. For this special topic, we invite papers that address any of these ecological aspects of speech perception.
  • Mitterer, H., & Tuinman, A. (2012). The role of native-language knowledge in the perception of casual speech in a second language. Frontiers in Psychology, 3, 249. doi:10.3389/fpsyg.2012.00249.

    Abstract

    Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual-speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners' performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner's native language, similar to what has been observed for phoneme contrasts.
  • Montero-Melis, G. (2021). Consistency in motion event encoding across languages. Frontiers in Psychology, 12: 625153. doi:10.3389/fpsyg.2021.625153.

    Abstract

    Syntactic templates serve as schemas, allowing speakers to describe complex events in a systematic fashion. Motion events have long served as a prime example of how different languages favor different syntactic frames, in turn biasing their speakers towards different event conceptualizations. However, there is also variability in how motion events are syntactically framed within languages. Here we measure the consistency in event encoding in two languages, Spanish and Swedish. We test a dominant account in the literature, namely that variability within a language can be explained by specific properties of the events. This event-properties account predicts that descriptions of one and the same event should be consistent within a language, even in languages where there is overall variability in the use of syntactic frames. Spanish and Swedish speakers (N=84) described 32 caused motion events. While the most frequent syntactic framing in each language was as expected based on typology (Spanish: verb-framed, Swedish: satellite-framed, cf. Talmy, 2000), Swedish descriptions were substantially more consistent than Spanish descriptions. Swedish speakers almost invariably encoded all events with a single syntactic frame and systematically conveyed manner of motion. Spanish descriptions, in contrast, varied much more regarding syntactic framing and expression of manner. Crucially, variability in Spanish descriptions was not mainly a function of differences between events, as predicted by the event-properties account. Rather, Spanish variability in syntactic framing was driven by speaker biases. A similar picture arose for whether Spanish descriptions expressed manner information or not: Even after accounting for the effect of syntactic choice, a large portion of the variance in Spanish manner encoding remained attributable to differences among speakers. The results show that consistency in motion event encoding starkly differs across languages: Some languages (like Swedish) bias their speakers towards a particular linguistic event schema much more than others (like Spanish). Implications of these findings are discussed with respect to the typology of event framing, theories on the relationship between language and thought, and speech planning. In addition, the tools employed here to quantify variability can be applied to other domains of language.

    Additional information

    data and analysis scripts
  • Moreno Santillán, D. D., Lama, T. M., Gutierrez Guerrero, Y. T., Brown, A. M., Donat, P., Zhao, H., Rossiter, S. J., Yohe, L. R., Potter, J. H., Teeling, E. C., Vernes, S. C., Davies, K. T. J., Myers, E., Hughes, G. M., Huang, Z., Hoffmann, F., Corthals, A. P., Ray, D. A., & Dávalos, L. M. (2021). Large‐scale genome sampling reveals unique immunity and metabolic adaptations in bats. Molecular Ecology, 30(23), 6449-6467. doi:10.1111/mec.16027.

    Abstract

    Comprising more than 1,400 species, bats possess adaptations unique among mammals, including powered flight, unexpected longevity, and extraordinary immunity. Some of the molecular mechanisms underlying these unique adaptations include DNA repair, metabolism and immunity. However, analyses have been limited to a few divergent lineages, reducing the scope of inferences on gene family evolution across the Order Chiroptera. We conducted an exhaustive comparative genomic study of 37 bat species, one generated in this study, encompassing a large number of lineages, with a particular emphasis on multi-gene family evolution across immune and metabolic genes. In agreement with previous analyses, we found lineage-specific expansions of the APOBEC3 and MHC-I gene families, and loss of the proinflammatory PYHIN gene family. We inferred more than 1,000 gene losses unique to bats, including genes involved in the regulation of inflammasome pathways such as epithelial defense receptors, the natural killer gene complex and the interferon-gamma induced pathway. Gene set enrichment analyses revealed genes lost in bats are involved in defense response against pathogen-associated molecular patterns and damage-associated molecular patterns. Gene family evolution and selection analyses indicate bats have evolved fundamental functional differences compared to other mammals in both the innate and adaptive immune systems, with the potential to enhance anti-viral immune response while dampening inflammatory signaling. In addition, metabolic genes have experienced repeated expansions related to convergent shifts to plant-based diets. Our analyses support the hypothesis that, in tandem with flight, ancestral bats had evolved a unique set of immune adaptations whose functional implications remain to be explored.

    Additional information

    supplementary material table S1-S18
  • Morgan, A., Braden, R., Wong, M. M. K., Colin, E., Amor, D., Liégeois, F., Srivastava, S., Vogel, A., Bizaoui, V., Ranguin, K., Fisher, S. E., & Van Bon, B. W. (2021). Speech and language deficits are central to SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1216-1225. doi:10.1038/s41431-021-00894-x.

    Abstract

    Expressive communication impairment is associated with haploinsufficiency of SETBP1, as reported in small case series. Heterozygous pathogenic loss-of-function (LoF) variants in SETBP1 have also been identified in independent cohorts ascertained for childhood apraxia of speech (CAS), warranting further investigation of the roles of this gene in speech development. Thirty-one participants (12 males, aged 0;8–23;2 years, 28 with pathogenic SETBP1 LoF variants, 3 with 18q12.3 deletions) were assessed for speech, language and literacy abilities. Broader development was examined with standardised motor, social and daily life skills assessments. Gross and fine motor deficits (94%) and intellectual impairments (68%) were common. Protracted and aberrant speech development was consistently seen, regardless of motor or intellectual ability. We expand the linguistic phenotype associated with SETBP1 LoF syndrome (SETBP1 haploinsufficiency disorder), revealing a striking speech presentation that implicates both motor (CAS, dysarthria) and language (phonological errors) systems, with CAS (80%) being the most common diagnosis. In contrast to past reports, the understanding of language was rarely better preserved than language expression (29%). Language was typically low to moderately impaired, with commensurate expression and comprehension ability. Children were sociable with a strong desire to communicate. Minimally verbal children (32%) augmented speech with sign language, gestures or digital devices. Overall, relative to general development, spoken language and literacy were poorer than social, daily living, motor and adaptive behaviour skills. Our findings show that poor communication is a central feature of SETBP1 haploinsufficiency disorder, confirming this gene as a strong candidate for speech and language disorders.
  • Moseley, R., Carota, F., Hauk, O., Mohr, B., & Pulvermüller, F. (2012). A role for the motor system in binding abstract emotional meaning. Cerebral Cortex, 22(7), 1634-1647. doi:10.1093/cercor/bhr238.

    Abstract

    Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states.
  • Nielsen, A. K. S., & Dingemanse, M. (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52-72. doi:10.1177/0023830920914339.

    Abstract

    Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear cut as it is often described and based on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language.
  • Nieuwland, M. S. (2021). How ‘rational’ is semantic prediction? A critique and re-analysis of Delaney-Busch et al. (2019). Cognition, 215: 104848. doi:10.1016/j.cognition.2021.104848.

    Abstract

    In a recent article in Cognition, Delaney-Busch et al. (2019) claim evidence for ‘rational’, Bayesian adaptation of semantic predictions, using ERP data from Lau, Holcomb, and Kuperberg (2013). Participants read associatively related and unrelated prime-target word pairs in a first block with only 10% related trials and a second block with 50%. Related words elicited smaller N400s than unrelated words, and this difference was strongest in the second block, suggesting greater engagement in predictive processing. Using a rational adaptor model, Delaney-Busch et al. argue that the stronger N400 reduction for related words in the second block developed as a function of the number of related trials, and concluded therefore that participants predicted related words more strongly when their predictions were fulfilled more often. In this critique, I discuss two critical flaws in their analyses, namely the confounding of prediction effects with those of lexical frequency and the neglect of data from the first block. Re-analyses suggest a different picture: related words by themselves did not yield support for their conclusion, and the effect of relatedness gradually strengthened in the two blocks in a similar way. Therefore, the N400 did not yield evidence that participants rationally adapted their semantic predictions. Within the framework proposed by Delaney-Busch et al., presumed semantic predictions may even be thought of as ‘irrational’. While these results yielded no evidence for rational or probabilistic prediction, they do suggest that participants became increasingly better at predicting target words from prime words.
  • Nieuwland, M. S. (2021). Commentary: Rational adaptation in lexical prediction: The influence of prediction strength. Frontiers in Psychology, 12: 735849. doi:10.3389/fpsyg.2021.735849.
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2012). Brain regions that process case: Evidence from Basque. Human Brain Mapping, 33(11), 2509-2520. doi:10.1002/hbm.21377.

    Abstract

    The aim of this event-related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well-understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions-of-interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language.
  • Nieuwland, M. S. (2012). Establishing propositional truth-value in counterfactual and real-world contexts during sentence comprehension: Differential sensitivity of the left and right inferior frontal gyri. NeuroImage, 59(4), 3433-3440. doi:10.1016/j.neuroimage.2011.11.018.

    Abstract

    What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences (“If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America”) and real-world sentences (“Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia”) that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension.
  • Nieuwland, M. S., & Martin, A. E. (2012). If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension. Cognition, 122(1), 102-109. doi:10.1016/j.cognition.2011.09.001.

    Abstract

    Propositional truth-value can be a defining feature of a sentence’s relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., “If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America”) and real-world sentences (e.g., “Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia”) alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Allophonic mode of speech perception in Dutch children at risk for dyslexia: A longitudinal study. Research in Developmental Disabilities, 33, 1469-1483. doi:10.1016/j.ridd.2012.03.021.

    Abstract

    There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Neural evidence of allophonic perception in children at risk for dyslexia. Neuropsychologia, 50, 2010-2017. doi:10.1016/j.neuropsychologia.2012.04.026.

    Abstract

    Learning to read is a complex process that develops normally in the majority of children and requires the mapping of graphemes to their corresponding phonemes. Problems with the mapping process nevertheless occur in about 5% of the population and are typically attributed to poor phonological representations, which are, in turn, attributed to underlying speech processing difficulties. We examined auditory discrimination of speech sounds in 6-year-old beginning readers with a familial risk of dyslexia (n=31) and no such risk (n=30) using the mismatch negativity (MMN). MMNs were recorded for stimuli belonging to either the same phoneme category (acoustic variants of /bə/) or different phoneme categories (/bə/ vs. /də/). Stimuli from different phoneme categories elicited MMNs in both the control and at-risk children, but the MMN amplitude was clearly lower in the at-risk children. In contrast, the stimuli from the same phoneme category elicited an MMN in only the children at risk for dyslexia. These results show children at risk for dyslexia to be sensitive to acoustic properties that are irrelevant in their language. Our findings thus suggest a possible cause of dyslexia in that they show 6-year-old beginning readers with at least one parent diagnosed with dyslexia to have a neural sensitivity to speech contrasts that are irrelevant in the ambient language. This sensitivity clearly hampers the development of stable phonological representations and thus leads to significant reading impairment later in life.
  • Nora, A., Hultén, A., Karvonen, L., Kim, J.-Y., Lehtonen, M., Yli-Kaitala, H., Service, E., & Salmelin, R. (2012). Long-term phonological learning begins at the level of word form. NeuroImage, 63, 789-799. doi:10.1016/j.neuroimage.2012.07.026.

    Abstract

    Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally but the experimental effects only in the left hemisphere. Superior temporal activity at 300–600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and correlated increase of left frontal activity at 600–1200 ms on both days. The results point to item-level learning of novel whole-word representations.
  • Norris, D., & Cutler, A. (2021). More why, less how: What we need from models of cognition. Cognition, 213: 104688. doi:10.1016/j.cognition.2021.104688.

    Abstract

    Science regularly experiences periods in which simply describing the world is prioritised over attempting to explain it. Cognition, this journal, came into being some 45 years ago as an attempt to lay one such period to rest; without doubt, it has helped create the current cognitive science climate in which theory is decidedly welcome. Here we summarise the reasons why a theoretical approach is imperative in our field, and call attention to some potentially counter-productive trends in which cognitive models are concerned too exclusively with how processes work at the expense of why the processes exist in the first place and thus what the goal of modelling them must be.
  • Nota, N., Trujillo, J. P., & Holler, J. (2021). Facial signals and social actions in multimodal face-to-face interaction. Brain Sciences, 11(8): 1017. doi:10.3390/brainsci11081017.

    Abstract

    In a conversation, recognising the speaker’s social action (e.g., a request) early may help the potential following speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
  • Nozais, V., Forkel, S. J., Foulon, C., Petit, L., & Thiebaut de Schotten, M. (2021). Functionnectome as a framework to analyse the contribution of brain circuits to fMRI. Communications Biology, 4: 1035. doi:10.1038/s42003-021-02530-2.

    Abstract

    In recent years, the field of functional neuroimaging has moved away from a pure localisationist approach of isolated functional brain regions to a more integrated view of these regions within functional networks. However, the methods used to investigate functional networks rely on local signals in grey matter and are limited in identifying anatomical circuitries supporting the interaction between brain regions. Mapping the brain circuits mediating the functional signal between brain regions would propel our understanding of the brain’s functional signatures and dysfunctions. We developed a method to unravel the relationship between brain circuits and functions: The Functionnectome. The Functionnectome combines the functional signal from fMRI with white matter circuits’ anatomy to unlock and chart the first maps of functional white matter. To showcase this method’s versatility, we provide the first functional white matter maps revealing the joint contribution of connected areas to motor, working memory, and language functions. The Functionnectome comes with an open-source companion software and opens new avenues into studying functional networks by applying the method to already existing datasets and beyond task fMRI.

    Additional information

    supplementary information
  • Ntemou, E., Ohlerth, A.-K., Ille, S., Krieg, S., Bastiaanse, R., & Rofes, A. (2021). Mapping verb retrieval with nTMS: The role of transitivity. Frontiers in Human Neuroscience, 15: 719461. doi:10.3389/fnhum.2021.719461.

    Abstract

    Navigated Transcranial Magnetic Stimulation (nTMS) is used to understand the cortical organization of language in preparation for the surgical removal of a brain tumor. Action naming with finite verbs can be employed for that purpose, providing additional information to object naming. However, little research has focused on the properties of the verbs that are used in action naming tasks, such as their status as transitive (taking an object; e.g., to read) or intransitive (not taking an object; e.g., to wink). Previous neuroimaging data show higher activation for transitive compared to intransitive verbs in posterior perisylvian regions bilaterally. In the present study, we employed nTMS and production of finite verbs to investigate the cortical underpinnings of transitivity. Twenty neurologically healthy native speakers of German participated in the study. They underwent language mapping in both hemispheres with nTMS. The action naming task with finite verbs consisted of transitive (e.g., The man reads the book) and intransitive verbs (e.g., The woman winks) and was controlled for relevant psycholinguistic variables. Errors were classified in four different error categories (i.e., non-linguistic errors, grammatical errors, lexico-semantic errors, and errors at the sound level) and were analyzed quantitatively. We found more nTMS-positive points in the left hemisphere, particularly in the left parietal lobe, for the production of transitive compared to intransitive verbs. These positive points most commonly corresponded to lexico-semantic errors. Our findings are in line with previous aphasia and neuroimaging studies, suggesting that a more widespread network is used for the production of verbs with a larger number of arguments (i.e., transitives). The higher number of lexico-semantic errors with transitive compared to intransitive verbs in the left parietal lobe supports previous claims for the role of left posterior areas in the retrieval of argument structure information.
  • Ohlerth, A.-K., Bastiaanse, R., Negwer, C., Sollmann, N., Schramm, S., Schröder, A., & Krieg, S. M. (2021). Benefit of action naming over object naming for visualization of subcortical language pathways in navigated transcranial magnetic stimulation-based diffusion tensor imaging-fiber tracking. Frontiers in Human Neuroscience, 15: 748274. doi:10.3389/fnhum.2021.748274.

    Abstract

    Visualization of functionally significant subcortical white matter fibers is needed in neurosurgical procedures in order to avoid damage to the language network during resection. In an effort to achieve this, positive cortical points revealed during preoperative language mapping with navigated transcranial magnetic stimulation (nTMS) can be employed as regions of interest (ROIs) for diffusion tensor imaging (DTI) fiber tracking. However, the effect that the use of different language tasks has on nTMS mapping and subsequent DTI-fiber tracking remains unexplored. The visualization of ventral stream tracts with an assumed lexico-semantic role may especially benefit from ROIs delivered by the lexico-semantically demanding verb task, Action Naming. In a first step, bihemispheric nTMS language mapping was administered in 18 healthy participants using the standard task Object Naming and the novel task Action Naming to trigger verbs in a small sentence context. Cortical areas in which nTMS induced language errors were identified as language-positive cortical sites. In a second step, nTMS-based DTI-fiber tracking was conducted using solely these language-positive points as ROIs. The ability of the two tasks' ROIs to visualize the dorsal tracts Arcuate Fascicle and Superior Longitudinal Fascicle, the ventral tracts Inferior Longitudinal Fascicle, Uncinate Fascicle, and Inferior Fronto-Occipital Fascicle, the speech-articulatory Cortico-Nuclear Tract, and interhemispheric commissural fibers was compared in both hemispheres. In the left hemisphere, ROIs of Action Naming led to a significantly higher fraction of overall visualized tracts, specifically in the ventral stream's Inferior Fronto-Occipital and Inferior Longitudinal Fascicle. No difference was found between tracking with Action Naming vs. Object Naming seeds for dorsal stream tracts, the speech-articulatory tract, or the interhemispheric connections. While the two tasks appeared equally demanding for phonological-articulatory processes, ROI seeding through the task Action Naming seemed to better visualize lexico-semantic tracts in the ventral stream. This distinction was not evident in the right hemisphere. However, the distribution of tracts exposed was, overall, mirrored relative to those in the left hemisphere network. In presurgical practice, mapping and tracking of language pathways may profit from these findings and should consider inclusion of the Action Naming task, particularly for lesions in ventral subcortical regions.
  • Ohlerth, A.-K., Bastiaanse, R., Negwer, C., Sollmann, N., Schramm, S., Schröder, A., & Krieg, S. M. (2021). Bihemispheric Navigated Transcranial Magnetic Stimulation Mapping for Action Naming Compared to Object Naming in Sentence Context. Brain Sciences, 11(9): 1190. doi:10.3390/brainsci11091190.

    Abstract

    Preoperative language mapping with navigated transcranial magnetic stimulation (nTMS) is currently based on the disruption of performance during object naming. The resulting cortical language maps, however, lack accuracy when compared to intraoperative mapping. The question arises whether nTMS results can be improved when another language task is considered, involving verb retrieval in sentence context. Twenty healthy German speakers were tested with object naming and a novel action naming task during nTMS language mapping. Error rates and categories in both hemispheres were compared. Action naming showed a significantly higher error rate than object naming in both hemispheres. Error category comparison revealed that this discrepancy stems from more lexico-semantic errors during action naming, indicating that lexico-semantic retrieval of the verb is more affected than noun retrieval. In an area-wise comparison, higher error rates surfaced in multiple right-hemisphere areas, but only trends in the left ventral postcentral gyrus and middle superior temporal gyrus. Hesitation errors contributed significantly to the error count, but did not dull the mapping results. Inclusion of action naming coupled with a detailed error analysis may be favorable for nTMS mapping and ultimately improve accuracy in preoperative planning. Moreover, the results stress the recruitment of both left- and right-hemispheric areas during naming.