Publications

  • Eising, E., Shyti, R., 'T Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A, which encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice are specific genes up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Eisner, F. (2012). Competition in the acoustic encoding of emotional speech. In L. McCrohon (Ed.), Five approaches to language evolution. Proceedings of the workshops of the 9th International Conference on the Evolution of Language (pp. 43-44). Tokyo: Evolang9 Organizing Committee.

    Abstract

    Speech conveys not only linguistic meaning but also paralinguistic information, such as features of the speaker’s social background, physiology, and emotional state. Linguistic and paralinguistic information is encoded in speech by using largely the same vocal apparatus and both are transmitted simultaneously in the acoustic signal, drawing on a limited set of acoustic cues. How this simultaneous encoding is achieved, how the different types of information are disentangled by the listener, and how much they interfere with one another is presently not well understood. Previous research has highlighted the importance of acoustic source and filter cues for emotion and linguistic encoding respectively, which may suggest that the two types of information are encoded independently of each other. However, those lines of investigation have been almost completely disconnected (Murray & Arnott, 1993).
  • Eisner, F. (2012). Perceptual learning in speech. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 16 (2nd. ed., pp. 2583-2584). Berlin: Springer.

    Abstract

    Perceptual learning in speech describes a change in the mapping from acoustic cues in the speech signal to abstract linguistic representations. Learning leads to a lasting benefit to the listener by improving speech comprehension. The change can occur as a response to a specific feature (such as a talker- or accent idiosyncrasy) or to a global degradation of the signal (such as in synthesized or compressed speech). In perceptual learning, a top-down process is involved in causing the change, whereas purely bottom-up, signal-driven phenomena are considered to be adaptation.
  • Elbers, W., Broeder, D., & Van Uytvanck, D. (2012). Proper language resource centers. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3260-3263). European Language Resources Association (ELRA).

    Abstract

    Language resource centers allow researchers to reliably deposit their structured data together with associated meta data and run services operating on this deposited data. We are looking into possibilities to create long-term persistency of both the deposited data and the services operating on this data. Challenges, both technical and non-technical, that need to be solved are the need to replicate more than just the data, proper identification of the digital objects in a distributed environment by making use of persistent identifiers and the set-up of a proper authentication and authorization domain including the management of the authorization information on the digital objects. We acknowledge the investment that most language resource centers have made in their current infrastructure. Therefore one of the most important requirements is the loose coupling with existing infrastructures without the need to make many changes. This shift from a single language resource center into a federated environment of many language resource centers is discussed in the context of a real world center: The Language Archive supported by the Max Planck Institute for Psycholinguistics.
  • Ellis-Davies, K., Sakkalou, E., Fowler, N., Hilbrink, E., & Gattis, M. (2012). CUE: The continuous unified electronic diary method. Behavior Research Methods, 44, 1063-1078. doi:10.3758/s13428-012-0205-1.

    Abstract

    In the present article, we introduce the continuous unified electronic (CUE) diary method, a longitudinal, event-based, electronic parent report method that allows real-time recording of infant and child behavior in natural contexts. Thirty-nine expectant mothers were trained to identify and record target behaviors into programmed handheld computers. From birth to 18 months, maternal reporters recorded the initial, second, and third occurrences of seven target motor behaviors: palmar grasp, rolls from side to back, reaching when sitting, pincer grip, crawling, walking, and climbing stairs. Compliance was assessed as two valid entries per behavior: 97 % of maternal reporters met compliance criteria. Reliability was assessed by comparing diary entries with researcher assessments for three of the motor behaviors: palmar grasp, pincer grip and walking. A total of 81 % of maternal reporters met reliability criteria. For those three target behaviors, age of emergence was compared across data from the CUE diary method and researcher assessments. The CUE diary method was found to detect behaviors earlier and with greater sensitivity to individual differences. The CUE diary method is shown to be a reliable methodological tool for studying processes of change in human development.
  • Enfield, N. J., Brown, P., & De Ruiter, J. (2012). Epistemic dimensions of polar questions: Sentence-final particles in comparative perspective. In J. P. De Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 193-221). New York: Cambridge University Press.
  • Enfield, N. J. (2012). Diversity disregarded [Review of the book Games primates play: An undercover investigation of the evolution and economics of human relationships by Dario Maestripieri]. Science, 337, 1295-1296. doi:10.1126/science.1225365.
  • Enfield, N. J., & Sidnell, J. (2012). Collateral effects, agency, and systems of language use [Reply to commentators]. Current Anthropology, 53(3), 327-329.
  • Enfield, N. J. (2012). [Review of the book "Language, culture, and mind: Natural constructions and social kinds", by Paul Kockelman]. Language in Society, 41(5), 674-677. doi:10.1017/S004740451200070X.
  • Enfield, N. J. (2017). Language in the Mainland Southeast Asia Area. In R. Hickey (Ed.), The Cambridge Handbook of Areal Linguistics (pp. 677-702). Cambridge: Cambridge University Press. doi:10.1017/9781107279872.026.
  • Enfield, N. J. (2012). Language innateness [Letter to the Editor]. The Times Literary Supplement, October 26, 2012(5717), 6.
  • Enfield, N. J. (2012). The slow explosion of speech [Review of the book The origins of Grammar by James R. Hurford]. The Times Literary Supplement, March 30, 2012(5687), 11-12. Retrieved from http://www.the-tls.co.uk/tls/public/article1004404.ece.

    Abstract

    Book review of James R. Hurford, The Origins of Grammar. 791pp. Oxford University Press. ISBN 978 0 19 920787 9.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2012). Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia, 50, 2154-2164. doi:10.1016/j.neuropsychologia.2012.05.013.

    Abstract

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Ernestus, M. (2012). Segmental within-speaker variation. In A. C. Cohn, C. Fougeron, & M. K. Huffman (Eds.), The Oxford handbook of laboratory phonology (pp. 93-102). New York: Oxford University Press.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free and mostly open-source libraries in Python. We provide our source code for other researchers as open source.
  • Escudero, P., Simon, E., & Mitterer, H. (2012). The perception of English front vowels by North Holland and Flemish listeners: Acoustic similarity predicts and explains cross-linguistic and L2 perception. Journal of Phonetics, 40, 280-288. doi:10.1016/j.wocn.2011.11.004.

    Abstract

    We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ in native speakers of Dutch spoken in North Holland (the Netherlands) and in East- and West-Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported in Adank, Van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e. English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Estruch, S. B., Buzon, V., Carbo, L. R., Schorova, L., Luders, J., & Estebanez-Perpina, E. (2012). The oncoprotein BCL11A binds to Orphan Nuclear Receptor TLX and potentiates its transrepressive function. PLoS One, 7(6): e37963. doi:10.1371/journal.pone.0037963.

    Abstract

    Nuclear orphan receptor TLX (NR2E1) functions primarily as a transcriptional repressor and its pivotal role in brain development, glioblastoma, mental retardation and retinopathologies make it an attractive drug target. TLX is expressed in the neural stem cells (NSCs) of the subventricular zone and the hippocampus subgranular zone, regions with persistent neurogenesis in the adult brain, and functions as an essential regulator of NSCs maintenance and self-renewal. Little is known about the TLX social network of interactors and only few TLX coregulators are described. To identify and characterize novel TLX-binders and possible coregulators, we performed yeast-two-hybrid (Y2H) screens of a human adult brain cDNA library using different TLX constructs as baits. Our screens identified multiple clones of Atrophin-1 (ATN1), a previously described TLX interactor. In addition, we identified an interaction with the oncoprotein and zinc finger transcription factor BCL11A (CTIP1/Evi9), a key player in the hematopoietic system and in major blood-related malignancies. This interaction was validated by expression and coimmunoprecipitation in human cells. BCL11A potentiated the transrepressive function of TLX in an in vitro reporter gene assay. Our work suggests that BCL11A is a novel TLX coregulator that might be involved in TLX-dependent gene regulation in the brain.
  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological Theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological Theory, 16.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., & Scholte, H. S. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proceedings of the National Academy of Sciences of the United States of America, 109(52), 21504-21509. doi:10.1073/pnas.1207414110.
  • Falk, J. J., Zhang, Y., Scheutz, M., & Yu, C. (2021). Parents adaptively use anaphora during parent-child social interaction. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1472-1478). Vienna: Cognitive Science Society.

    Abstract

    Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Fawcett, C., & Liszkowski, U. (2012). Infants anticipate others’ social preferences. Infant and Child Development, 21, 239-249. doi:10.1002/icd.739.

    Abstract

    In the current eye-tracking study, we explored whether 12-month-old infants can predict others' social preferences. We showed infants scenes in which two characters alternately helped or hindered an agent in his goal of climbing a hill. In a control condition, the two characters moved up and down the hill in identical ways to the helper and hinderer but did not make contact with the agent; thus, they did not cause him to reach or fail to reach his goal. Following six alternating familiarization trials of helping and hindering interactions (help-hinder condition) or up and down interactions (up-down condition), infants were shown one test trial in which they could visually anticipate the agent approaching one of the two characters. As predicted, infants in the help-hinder condition made significantly more visual anticipations toward the helping than hindering character, suggesting that they predicted the agent to approach the helping character. In contrast, infants revealed no difference in visual anticipations between the up and down characters. The up-down condition served to control for low-level perceptual explanations of the results for the help-hinder condition. Thus, together the results reveal that 12-month-old infants make predictions about others' behaviour and social preferences from a third-party perspective.
  • Fawcett, C., & Liszkowski, U. (2012). Mimicry and play initiation in 18-month-old infants. Infant Behavior and Development, 35, 689-696. doi:10.1016/j.infbeh.2012.07.014.

    Abstract

    Across two experiments, we examined the relationship between 18-month-old infants’ mimicry and social behavior – particularly invitations to play with an adult play partner. In Experiment 1, we manipulated whether an adult mimicked the infant's play or not during an initial play phase. We found that infants who had been mimicked were subsequently more likely to invite the adult to join their play with a new toy. In addition, they reenacted marginally more steps from a social learning demonstration she gave. In Experiment 2, infants had the chance to spontaneously mimic the adult during the play phase. Complementing Experiment 1, those infants who spent more time mimicking the adult were more likely to invite her to play with a new toy. This effect was specific to play and not apparent in other communicative acts, such as directing the adult's attention to an event or requesting toys. Together, the results suggest that infants use mimicry as a tool to establish social connections with others and that mimicry has specific influences on social behaviors related to initiating subsequent joint interactions.
  • Fawcett, C., & Liszkowski, U. (2012). Observation and initiation of joint action in infants. Child Development, 83, 434-441. doi:10.1111/j.1467-8624.2011.01717.x.

    Abstract

    Infants imitate others’ individual actions, but do they also replicate others’ joint activities? To examine whether observing joint action influences infants’ initiation of joint action, forty-eight 18-month-old infants observed object demonstrations by 2 models acting together (joint action), 2 models acting individually (individual action), or 1 model acting alone (solitary action). Infants’ behavior was examined after they were given each object. Infants in the joint action condition attempted to initiate joint action more often than infants in the other conditions, yet they were equally likely to communicate for other reasons and to imitate the demonstrated object-directed actions. The findings suggest that infants learn to replicate others’ joint activity through observation, an important skill for cultural transmission of shared practices.
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were only rated significantly less acceptable than unspliced words when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels.
  • Fedden, S., & Boroditsky, L. (2012). Spatialization of time in Mian. Frontiers in Psychology, 3, 485. doi:10.3389/fpsyg.2012.00485.

    Abstract

    We examine representations of time among the Mianmin of Papua New Guinea. We begin by describing the patterns of spatial and temporal reference in Mian. Mian uses a system of spatial terms that derive from the orientation and direction of the Hak and Sek rivers and the surrounding landscape. We then report results from a temporal arrangement task administered to a group of Mian speakers. The results reveal evidence for a variety of temporal representations. Some participants arranged time with respect to their bodies (left to right or toward the body). Others arranged time as laid out on the landscape, roughly along the east/west axis (either east to west or west to east). This absolute pattern is consistent both with the axis of the motion of the sun and the orientation of the two rivers, which provides the basis for spatial reference in the Mian language. The results also suggest an increase in left-to-right temporal representations with increasing years of formal education (and the reverse pattern for absolute spatial representations for time). These results extend previous work on spatial representations for time to a new geographical region, physical environment, and linguistic and cultural system.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R. (2021). Learning second language speech perception in natural settings. PhD Thesis, Radboud University, Nijmegen.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterate adults were significantly better than illiterate adults and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
  • Ferreri, A., Ponzoni, M., Govi, S., Pasini, E., Mappa, S., Vino, A., Facchetti, F., Vezzoli, P., Doglioni, C., Berti, E., & Dolcetti, R. (2012). Prevalence of chlamydial infection in a series of 108 primary cutaneous lymphomas. British Journal of Dermatology, 166(5), 1121-1123. doi:10.1111/j.1365-2133.2011.10704.x.
  • Fessler, D. M., Stieger, S., Asaridou, S. S., Bahia, U., Cravalho, M., de Barros, P., Delgado, T., Fisher, M. L., Frederick, D., Perez, P. G., Goetz, C., Haley, K., Jackson, J., Kushnick, G., Lew, K., Pain, E., Florindo, P. P., Pisor, A., Sinaga, E., Sinaga, L., Smolich, L., Sun, D. M., & Voracek, M. (2012). Testing a postulated case of intersexual selection in humans: The role of foot size in judgments of physical attractiveness and age. Evolution and Human Behavior, 33, 147-164. doi:10.1016/j.evolhumbehav.2011.08.002.

    Abstract

    The constituents of attractiveness differ across the sexes. Many relevant traits are dimorphic, suggesting that they are the product of intersexual selection. However, direction of causality is generally difficult to determine, as aesthetic criteria can as readily result from, as cause, dimorphism. Women have proportionately smaller feet than men. Prior work on the role of foot size in attractiveness suggests an asymmetry across the sexes, as small feet enhance female appearance, yet average, rather than large, feet are preferred on men. Previous investigations employed crude stimuli and limited samples. Here, we report on multiple cross-cultural studies designed to overcome these limitations. With the exception of one rural society, we find that small foot size is preferred when judging women, yet no equivalent preference applies to men. Similarly, consonant with the thesis that a preference for youth underlies intersexual selection acting on women, we document an inverse relationship between foot size and perceived age. Examination of preferences regarding, and inferences from, feet viewed in isolation suggests different roles for proportionality and absolute size in judgments of female and male bodies. Although the majority of these results bolster the conclusion that pedal dimorphism is the product of intersexual selection, the picture is complicated by the reversal of the usual preference for small female feet found in one rural society. While possibly explicable in terms of greater emphasis on female economic productivity relative to beauty, the latter finding underscores the importance of employing diverse samples when exploring postulated evolved aesthetic preferences.

    Additional information

    Fessler_2011_Suppl_material.pdf
  • Filippi, P., Charlton, B. D., & Fitch, W. T. (2012). Do Women Prefer More Complex Music around Ovulation? PLoS One, 7(4): e35626. doi:10.1371/journal.pone.0035626.

    Abstract

    The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P. (2012). Sintassi, Prosodia e Socialità: le Origini del Linguaggio Verbale [Syntax, prosody and sociality: The origins of verbal language]. PhD Thesis, Università degli Studi di Palermo, Palermo.

    Abstract

    What is the key cognitive ability that makes humans unique among all the other animals? Our work aims at contributing to this research question by adopting a comparative and philosophical approach to the origins of verbal language. In particular, we adopt three strands of analysis that are relevant in the context of comparative investigation on the origins of verbal language: a) research on evolutionary ‘homologies’, which provides information on the phylogenetic traits that humans and other primates share with their common ancestor; b) investigations on ‘analogous’ traits, aimed at finding the evolutionary pressures that guided the emergence of the same biological traits that evolved independently in phylogenetically distant species; c) the ontogenetic development of the ability to produce and understand verbal language in human infants. Within this comparative approach, we focus on three key aspects that we address by bridging recent empirical evidence on language processing with philosophical investigations on verbal language: (i) pattern processing as a biological precursor of syntax and algebraic rule acquisition, (ii) sound modulation as a guide to pattern comprehension in speech, animal vocalization and music, (iii) social strategies for mutual understanding, survival and group cohesion. We conclude by emphasizing the interplay between these three sets of cognitive processes as a fundamental dimension grounding the emergence of the human ability for propositional language.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E., Hatchwell, E., Chand, A., Ockenden, N., Monaco, A. P., & Craig, I. W. (1995). Construction of two YAC contigs in human Xp11.23-p11.22, one encompassing the loci OATL1, GATA, TFE3, and SYP, the other linking DXS255 to DXS146. Genomics, 29(2), 496-502. doi:10.1006/geno.1995.9976.

    Abstract

    We have constructed two YAC contigs in the Xp11.23-p11.22 interval of the human X chromosome, a region that was previously poorly characterized. One contig, of at least 1.4 Mb, links the pseudogene OATL1 to the genes GATA1, TFE3, and SYP and also contains loci implicated in Wiskott-Aldrich syndrome and synovial sarcoma. A second contig, mapping proximal to the first, is estimated to be over 2.1 Mb and links the hypervariable locus DXS255 to DXS146, and also contains a chloride channel gene that is responsible for hereditary nephrolithiasis. We have used plasmid rescue, inverse PCR, and Alu-PCR to generate 20 novel markers from this region, 1 of which is polymorphic, and have positioned these relative to one another on the basis of YAC analysis. The order of previously known markers within our contigs, Xpter-OATL1-GATA-TFE3-SYP-DXS255-DXS146-Xcen, agrees with genomic pulsed-field maps of the region. In addition, we have constructed a rare-cutter restriction map for a 710-kb region of the DXS255-DXS146 contig and have identified three CpG islands. These contigs and new markers will provide a useful resource for more detailed analysis of Xp11.23-p11.22, a region implicated in several genetic diseases.
  • Fisher, S. E., Van Bakel, I., Lloyd, S. E., Pearce, S. H. S., Thakker, R. V., & Craig, I. W. (1995). Cloning and characterization of CLCN5, the human kidney chloride channel gene implicated in Dent disease (an X-linked hereditary nephrolithiasis). Genomics, 29, 598-606. doi:10.1006/geno.1995.9960.

    Abstract

    Dent disease, an X-linked familial renal tubular disorder, is a form of Fanconi syndrome associated with proteinuria, hypercalciuria, nephrocalcinosis, kidney stones, and eventual renal failure. We have previously used positional cloning to identify the 3' part of a novel kidney-specific gene (initially termed hClC-K2, but now referred to as CLCN5), which is deleted in patients from one pedigree segregating Dent disease. Mutations that disrupt this gene have been identified in other patients with this disorder. Here we describe the isolation and characterization of the complete open reading frame of the human CLCN5 gene, which is predicted to encode a protein of 746 amino acids, with significant homology to all known members of the ClC family of voltage-gated chloride channels. CLCN5 belongs to a distinct branch of this family, which also includes the recently identified genes CLCN3 and CLCN4. We have shown that the coding region of CLCN5 is organized into 12 exons, spanning 25-30 kb of genomic DNA, and have determined the sequence of each exon-intron boundary. The elucidation of the coding sequence and exon-intron organization of CLCN5 will both expedite the evaluation of structure/function relationships of these ion channels and facilitate the screening of other patients with renal tubular dysfunction for mutations at this locus.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, V. J. (2017). Dance as Embodied Analogy: Designing an Empirical Research Study. In M. Van Delft, J. Voets, Z. Gündüz, H. Koolen, & L. Wijers (Eds.), Danswetenschap in Nederland. Utrecht: Vereniging voor Dansonderzoek (VDO).
  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body-oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (Eds.). (2012). Pattern perception and computational complexity [Special Issue]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367 (1598).
  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (2012). Pattern perception and computational complexity: Introduction to the special issue. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367 (1598), 1925-1932. doi:10.1098/rstb.2012.0099.

    Abstract

    Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • Floyd, S. (2012). Book review of [Poéticas de vida en espacios de muerte: Género, poder y estado en la cotidianeidad warao [Poetics of life in spaces of death: Gender, power and the state in Warao everyday life] Charles L. Briggs. Quito, Ecuador: Abya Yala, 2008. 460 pp.]. American Anthropologist, 114, 543-544. doi:10.1111/j.1548-1433.2012.01461_1.x.
  • Floyd, S. (2017). Requesting as a means for negotiating distributed agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 67-78). Oxford: Oxford University Press.
  • Fonteijn, H. M., Modat, M., Clarkson, M. J., Barnes, J., Lehmann, M., Hobbs, N. Z., Scahill, R. I., Tabrizi, S. J., Ourselin, S., Fox, N. C., & Alexander, D. C. (2012). An event-based model for disease progression and its application in familial Alzheimer's disease and Huntington's disease. NeuroImage, 60, 1880-1889. doi:10.1016/j.neuroimage.2012.01.062.

    Abstract

    Understanding the progression of neurological diseases is vital for accurate and early diagnosis and treatment planning. We introduce a new characterization of disease progression, which describes the disease as a series of events, each comprising a significant change in patient state. We provide novel algorithms to learn the event ordering from heterogeneous measurements over a whole patient cohort and demonstrate using combined imaging and clinical data from familial-Alzheimer's and Huntington's disease cohorts. Results provide new detail in the progression pattern of these diseases, while confirming known features, and give unique insight into the variability of progression over the cohort. The key advantage of the new model and algorithms over previous progression models is that they do not require a priori division of the patients into clinical stages. The model and its formulation extend naturally to a wide range of other diseases and developmental processes and accommodate cross-sectional and longitudinal input data.
  • Frances, C., Navarra-Barindelli, E., & Martin, C. D. (2021). Inhibitory and facilitatory effects of phonological and orthographic similarity on L2 word recognition across modalities in bilinguals. Scientific Reports, 11: 12812. doi:10.1038/s41598-021-92259-z.

    Abstract

    Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.

    Additional information

    supplementary information
  • Frances, C. (2021). Semantic richness, semantic context, and language learning. PhD Thesis, Universidad del País Vasco-Euskal Herriko Unibertsitatea, Donostia.

    Abstract

    As knowing a foreign language becomes a necessity in the modern world, a large portion of the population is faced with the challenge of learning a language in a classroom. This, in turn, presents a unique set of difficulties. Acquiring a language with limited and artificial exposure makes learning new information and vocabulary particularly difficult. The purpose of this thesis is to help us understand how we can compensate—at least partially—for these difficulties by presenting information in a way that aids learning. In particular, I focused on variables that affect semantic richness—meaning the amount and variability of information associated with a word. Some factors that affect semantic richness are intrinsic to the word and others pertain to that word’s relationship with other items and information. This latter group depends on the context around the to-be-learned items rather than the words themselves. These variables are easier to manipulate than intrinsic qualities, making them more accessible tools for teaching and understanding learning. I focused on two factors: emotionality of the surrounding semantic context and contextual diversity.

    Publication 1 (Frances, de Bruin, et al., 2020b) focused on content learning in a foreign language and whether the emotionality—positive or neutral—of the semantic context surrounding key information aided its learning. This built on prior research that showed a reduction in emotionality in a foreign language. Participants were taught information embedded in either positive or neutral semantic contexts in either their native or foreign language. When they were then tested on these embedded facts, participants’ performance decreased in the foreign language. But, more importantly, they remembered the information from the positive semantic contexts better than that from the neutral ones.

    In Publication 2 (Frances, de Bruin, et al., 2020a), I focused on how emotionality affected vocabulary learning. I taught participants the names of novel items described either in positive or neutral terms in either their native or foreign language. Participants were then asked to recall and recognize the object's name when cued with its image. The effects of language varied with the difficulty of the task, appearing in recall but not recognition tasks. Most importantly, learning the words in a positive context improved learning, particularly of the association between the image of the object and its name.

    In Publication 3 (Frances, Martin, et al., 2020), I explored the effects of contextual diversity—namely, the number of texts a word appears in—on native and foreign language word learning. Participants read several texts that contained novel pseudowords. The total number of encounters with the novel words was held constant, but they appeared in 1, 2, 4, or 8 texts in either their native or foreign language. Increasing contextual diversity—i.e., the number of texts a word appeared in—improved recall and recognition, as well as the ability to match the word with its meaning. Using a foreign language only affected performance when participants had to quickly identify the meaning of the word.

    Overall, I found that the tested contextual factors related to semantic richness—i.e., emotionality of the semantic context and contextual diversity—can be manipulated to improve learning in a foreign language. Using positive emotionality not only improved learning in the foreign language, but it did so to the same extent as in the native language. On a theoretical level, this suggests that the reduction in emotionality in a foreign language is not ubiquitous and might relate to the way in which that language was learned.

    The third article shows an experimental manipulation of contextual diversity and how this can affect learning of a lexical item, even if the amount of information known about the item is kept constant. As in the case of emotionality, the effects of contextual diversity were also the same between languages. Although deducing words from context is dependent on vocabulary size, this does not seem to hinder the benefits of contextual diversity in the foreign language.

    Finally, as a whole, the articles contained in this compendium provide evidence that some aspects of semantic richness can be manipulated contextually to improve learning and memory. In addition, the effects of these factors seem to be independent of language status—meaning, native or foreign—when learning new content. This suggests that learning in a foreign and a native language is not as different as I initially hypothesized, allowing us to take advantage of native language learning tools in the foreign language, as well.
  • Franceschini, R. (2012). Wolfgang Klein und die LiLi [Wolfgang Klein and the LiLi] [Laudatio]. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168), 5-7.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.

    Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Franken, M. K., Huizinga, C. S. M., & Schiller, N. O. (2012). De grafemische buffer: Aspecten van een spellingstoornis [The graphemic buffer: Aspects of a spelling disorder]. Stem-, Spraak- en Taalpathologie, 17(3), 17-36.

    Abstract

    A spelling disorder that received much attention recently is the so-called graphemic buffer impairment. Caramazza et al. (1987) presented the first systematic case study of a patient with this disorder. Miceli & Capasso (2006) provide an extensive overview of the relevant literature. This article adds to the literature by describing a Dutch case, i.e. patient BM. We demonstrate how specific features of Dutch and Dutch orthography interact with the graphemic buffer impairment. In addition, we paid special attention to the influence of grapheme position on the patient’s spelling accuracy. For this we used, in contrast with most of the previous literature, the proportional accountability method described in Machtynger & Shallice (2009). We show that by using this method the underlying error distribution can be more optimally captured than with classical methods. The result of this analysis replicates two distributions that have been previously reported in the literature. Finally, attention will be paid to the role of phonology in the described disorder.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e45900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • French, C. A., Jin, X., Campbell, T. G., Gerfen, E., Groszer, M., Fisher, S. E., & Costa, R. M. (2012). An aetiological Foxp2 mutation causes aberrant striatal activity and alters plasticity during skill learning. Molecular Psychiatry, 17, 1077-1085. doi:10.1038/mp.2011.105.

    Abstract

    Mutations in the human FOXP2 gene cause impaired speech development and linguistic deficits, which have been best characterised in a large pedigree called the KE family. The encoded protein is highly conserved in many vertebrates and is expressed in homologous brain regions required for sensorimotor integration and motor-skill learning, in particular corticostriatal circuits. Independent studies in multiple species suggest that the striatum is a key site of FOXP2 action. Here, we used in vivo recordings in awake-behaving mice to investigate the effects of the KE-family mutation on the function of striatal circuits during motor-skill learning. We uncovered abnormally high ongoing striatal activity in mice carrying an identical mutation to that of the KE family. Furthermore, there were dramatic alterations in striatal plasticity during the acquisition of a motor skill, with most neurons in mutants showing negative modulation of firing rate, starkly contrasting with the predominantly positive modulation seen in control animals. We also observed striking changes in the temporal coordination of striatal firing during motor-skill learning in mutants. Our results indicate that FOXP2 is critical for the function of striatal circuits in vivo, which are important not only for speech but also for other striatal-dependent skills.

    Additional information

    French_2011_Supplementary_Info.pdf
  • Friedrich, P., Forkel, S. J., Amiez, C., Balsters, J. H., Coulon, O., Fan, L., Goulas, A., Hadj-Bouziane, F., Hecht, E. E., Heuer, K., Jiang, T., Latzman, R. D., Liu, X., Loh, K. K., Patil, K. R., Lopez-Persem, A., Procyk, E., Sallet, J., Toro, R., Vickery, S., Weis, S., Wilson, C., Xu, T., Zerbi, V., Eickhoff, S. B., Margulies, D., Mars, R., & Thiebaut de Schotten, M. (2021). Imaging evolution of the primate brain: The next frontier? NeuroImage, 228: 117685. doi:10.1016/j.neuroimage.2020.117685.

    Abstract

    Evolution, as we currently understand it, strikes a delicate balance between animals' ancestral history and adaptations to their current niche. Similarities between species are generally considered inherited from a common ancestor whereas observed differences are considered as more recent evolution. Hence comparing species can provide insights into the evolutionary history. Comparative neuroimaging has recently emerged as a novel subdiscipline, which uses magnetic resonance imaging (MRI) to identify similarities and differences in brain structure and function across species. Whereas invasive histological and molecular techniques are superior in spatial resolution, they are laborious, post-mortem, and oftentimes limited to specific species. Neuroimaging, by comparison, has the advantages of being applicable across species and allows for fast, whole-brain, repeatable, and multi-modal measurements of the structure and function in living brains and post-mortem tissue. In this review, we summarise the current state of the art in comparative anatomy and function of the brain and gather together the main scientific questions to be explored in the future of the fascinating new field of brain evolution derived from comparative neuroimaging.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition
  • Frost, R. L. A., & Casillas, M. (2021). Investigating statistical learning of nonadjacent dependencies: Running statistical learning tasks in non-WEIRD populations. In SAGE Research Methods Cases. doi:10.4135/9781529759181.

    Abstract

    Language acquisition is complex. However, one thing that has been suggested to help learning is the way that information is distributed throughout language; co-occurrences among particular items (e.g., syllables and words) have been shown to help learners discover the words that a language contains and figure out how those words are used. Humans’ ability to draw on this information—“statistical learning”—has been demonstrated across a broad range of studies. However, evidence from non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies is critically lacking, which limits theorizing on the universality of this skill. We extended work on statistical language learning to a new, non-WEIRD linguistic population: speakers of Yélî Dnye, who live on a remote island off mainland Papua New Guinea (Rossel Island). We performed a replication of an existing statistical learning study, training adults on an artificial language with statistically defined words, then examining what they had learnt using a two-alternative forced-choice test. Crucially, we implemented several key amendments to the original study to ensure the replication was suitable for remote field-site testing with speakers of Yélî Dnye. We made critical changes to the stimuli and materials (to test speakers of Yélî Dnye, rather than English), the instructions (we re-worked these significantly, and added practice tasks to optimize participants’ understanding), and the study format (shifting from a lab-based to a portable tablet-based setup). We discuss the requirement for acute sensitivity to linguistic, cultural, and environmental factors when adapting studies to test new populations.

  • Frost, R. L. A., Gaskell, G., Warker, J., Guest, J., Snowdon, R., & Stackhouse, A. (2012). Sleep facilitates acquisition of implicit phonotactic constraints in speech production. Journal of Sleep Research, 21(s1), 249-249. doi:10.1111/j.1365-2869.2012.01044.x.

    Abstract

    Sleep plays an important role in neural reorganisation which underpins memory consolidation. The gradual replacement of hippocampal binding of new memories with intracortical connections helps to link new memories to existing knowledge. This process appears to be faster for memories which fit more easily into existing schemas. Here we seek to investigate whether this more rapid consolidation of schema-conformant information is facilitated by sleep, and the neural basis of this process.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • De la Fuente, J., Santiago, J., Roma, A., Dumitrache, C., & Casasanto, D. (2012). Facing the past: cognitive flexibility in the front-back mapping of time [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Poster Presentations, 13(Suppl. 1), S58.

    Abstract

    In many languages the future is in front and the past behind, but in some cultures (like Aymara) the past is in front. Is it possible to find this mapping as an alternative conceptualization of time in other cultures? If so, what are the factors that affect its choice out of the set of available alternatives? In a paper-and-pencil task, participants placed future or past events either in front of or behind a character (a schematic head viewed from above). A sample of 24 Islamic participants (whose language also places the future in front and the past behind) tended to locate the past event in the front box more often than Spanish participants. This result might be due to the greater cultural value assigned to tradition in Islamic culture. The same pattern was found in a sample of Spanish elders (N = 58), which may support that conclusion. Alternatively, the crucial factor may be the amount of attention paid to the past. In a final study, young Spanish adults (N = 200) who had just answered a set of questions about their past showed the past-in-front pattern, whereas questions about their future exacerbated the future-in-front pattern. Thus, the attentional explanation was supported: attended events are mapped to front space in agreement with the experiential connection between attending and seeing. When attention is paid to the past, it tends to occupy the front location in spite of available alternative mappings in the language-culture.
  • Furman, R. (2012). Caused motion events in Turkish: Verbal and gestural representation in adults and children. PhD Thesis, Radboud University Nijmegen/LOT.

    Abstract

    Caused motion events (e.g. a boy pulls a box into a room) are basic events where an Agent (the boy) performs an Action (pulling) that causes a Figure (box) to move in a spatial Path (into) to a Goal (the room). These semantic elements are mapped onto lexical and syntactic structures differently across languages. This dissertation investigates the encoding of caused motion events in Turkish, and the development of this encoding in speech and gesture. First, a linguistic analysis shows that Turkish does not fully fit into the expected typological patterns, and that the encoding of caused motion is determined by the fine-grained lexical semantics of a verb as well as the syntactic construction the verb is integrated into. A grammaticality judgment study conducted with adult Turkish speakers further establishes the fundamentals of the encoding patterns. An event description study compares adults’ verbal and gestural representations of caused motion to those of children aged 3 to 5. The findings indicate that although language-specificity is evident in children’s speech and gestures, the development of adult patterns takes time and occurs after the age of 5. A final study investigates a longitudinal video corpus of the spontaneous speech of Turkish-speaking children aged 1 to 3, and finds that language-specificity is evident from the start in both children’s speech and gesture. Apart from contributing to the literature on the development of Turkish, this dissertation furthers our understanding of the interaction between language-specificity and the multimodal expression of semantic information in event descriptions.
  • Fusaroli, R., Tylén, K., Garly, K., Steensig, J., Christiansen, M. H., & Dingemanse, M. (2017). Measures and mechanisms of common ground: Backchannels, conversational repair, and interactive alignment in free and task-oriented social interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2055-2060). Austin, TX: Cognitive Science Society.

    Abstract

    A crucial aspect of everyday conversational interactions is our ability to establish and maintain common ground. Understanding the relevant mechanisms involved in such social coordination remains an important challenge for cognitive science. While common ground is often discussed in very general terms, different contexts of interaction are likely to afford different coordination mechanisms. In this paper, we investigate the presence and relation of three mechanisms of social coordination – backchannels, interactive alignment and conversational repair – across free and task-oriented conversations. We find significant differences: task-oriented conversations involve a higher presence of repair – restricted offers in particular – and backchannels, as well as a reduced level of lexical and syntactic alignment. We find that restricted repair is associated with lexical alignment and open repair with backchannels. Our findings highlight the need to explicitly assess several mechanisms at once and to investigate diverse activities to understand their role and relations.
  • Gaby, A. (2012). The Thaayorre lexicon of putting and taking. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 233-252). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics and relative distributions of verbs describing putting and taking events in Kuuk Thaayorre, a Pama-Nyungan language of Cape York (Australia). Thaayorre put/take verbs can be subcategorised according to whether they may combine with an NP encoding a goal, an NP encoding a source, or both. Goal NPs are far more frequent in natural discourse: initial analysis shows 85% of goal-oriented verb tokens to be accompanied by a goal NP, while only 31% of source-oriented verb tokens were accompanied by a source NP. This finding adds weight to Ikegami’s (1987) assertion of the conceptual primacy of goals over sources, reflected in a cross-linguistic asymmetry whereby goal-marking is less marked and more widely used than source-marking.
  • Galke, L., Franke, B., Zielke, T., & Scherp, A. (2021). Lifelong learning of graph neural networks for open-world node classification. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). Piscataway, NJ: IEEE. doi:10.1109/IJCNN52387.2021.9533412.

    Abstract

    Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification. However, real-world graphs are often evolving over time and even new classes may arise. We model these challenges as an instance of lifelong learning, in which a learner faces a sequence of tasks and may carry over knowledge acquired in past tasks. Such knowledge may be stored explicitly as historic data or implicitly within model parameters. In this work, we systematically analyze the influence of implicit and explicit knowledge. To this end, we present an incremental training method for lifelong learning on graphs and introduce a new measure based on k-neighborhood time differences to address variances in the historic data. We apply our training method to five representative GNN architectures and evaluate them on three new lifelong node classification datasets. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the complete history of the graph data. Furthermore, our experiments confirm that implicit knowledge becomes more important when less explicit knowledge is available.
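    The abstract's distinction between explicit knowledge (retained historic data) and implicit knowledge (whatever survives in the model's parameters) can be illustrated with a minimal continual-training loop. This is a toy sketch, not the paper's GNN method: the bounded history window, the `lifelong_train` helper, and the running-mean "model" are all illustrative assumptions.

```python
from collections import deque


class RunningMeanModel:
    """Toy stand-in for a learner: its single parameter (a running mean)
    is the implicit knowledge carried forward across tasks."""

    def __init__(self):
        self.mean = 0.0
        self.n_seen = 0

    def fit(self, examples):
        # Incremental update; earlier examples influence the parameter
        # even after their data is gone.
        for x in examples:
            self.n_seen += 1
            self.mean += (x - self.mean) / self.n_seen


def lifelong_train(tasks, model, window_size=2):
    """Train over a task sequence, keeping only a bounded window of
    historic data (explicit knowledge). Tasks that fall out of the
    window survive only through the model parameters (implicit)."""
    history = deque(maxlen=window_size)  # explicit knowledge store
    train_sizes = []
    for task_data in tasks:
        history.append(task_data)
        train_set = [x for task in history for x in task]
        train_sizes.append(len(train_set))
        model.fit(train_set)
    return model, train_sizes
```

Shrinking `window_size` trades explicit for implicit knowledge, which is the axis the paper's experiments vary.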
  • Galke, L., Seidlmayer, E., Lüdemann, G., Langnickel, L., Melnychuk, T., Förstner, K. U., Tochtermann, K., & Schultz, C. (2021). COVID-19++: A citation-aware Covid-19 dataset for the analysis of research dynamics. In Y. Chen, H. Ludwig, Y. Tu, U. Fayyad, X. Zhu, X. Hu, S. Byna, X. Liu, J. Zhang, S. Pan, V. Papalexakis, J. Wang, A. Cuzzocrea, & C. Ordonez (Eds.), Proceedings of the 2021 IEEE International Conference on Big Data (pp. 4350-4355). Piscataway, NJ: IEEE.

    Abstract

    COVID-19 research datasets are crucial for analyzing research dynamics. Most collections of COVID-19 research items do not include cited works and do not have annotations from a controlled vocabulary. Starting with ZB MED KE data on COVID-19, which comprises CORD-19, we assemble a new dataset that includes cited work and MeSH annotations for all records. Furthermore, we conduct experiments on the analysis of research dynamics, in which we investigate predicting links in a co-annotation graph created on the basis of the new dataset. Surprisingly, we find that simple heuristic methods are better at predicting future links than more sophisticated approaches such as graph neural networks.
  • Galke, L., Mai, F., Schelten, A., Brunsch, D., & Scherp, A. (2017). Using titles vs. full-text as source for automated semantic document annotation. In O. Corcho, K. Janowicz, G. Rizzo, I. Tiddi, & D. Garijo (Eds.), Proceedings of the 9th International Conference on Knowledge Capture (K-CAP 2017). New York: ACM.

    Abstract

    We conduct the first systematic comparison of automated semantic annotation based on either the full-text or only on the title metadata of documents. Apart from the prominent text classification baselines kNN and SVM, we also compare recent techniques of Learning to Rank and neural networks and revisit the traditional methods logistic regression, Rocchio, and Naive Bayes. Across three of our four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.
  • Galke, L., Saleh, A., & Scherp, A. (2017). Word embeddings for practical information retrieval. In M. Eibl, & M. Gaedke (Eds.), INFORMATIK 2017 (pp. 2155-2167). Bonn: Gesellschaft für Informatik. doi:10.18420/in2017_215.

    Abstract

    We assess the suitability of word embeddings for practical information retrieval scenarios. Thus, we assume that users issue ad-hoc short queries where we return the first twenty retrieved documents after applying a boolean matching operation between the query and the documents. We compare the performance of several techniques that leverage word embeddings in the retrieval models to compute the similarity between the query and the documents, namely word centroid similarity, paragraph vectors, Word Mover’s distance, as well as our novel inverse document frequency (IDF) re-weighted word centroid similarity. We evaluate the performance using the ranking metrics mean average precision, mean reciprocal rank, and normalized discounted cumulative gain. Additionally, we inspect the retrieval models’ sensitivity to document length by using either only the title or the full-text of the documents for the retrieval task. We conclude that word centroid similarity is the best competitor to state-of-the-art retrieval models. It can be further improved by re-weighting the word frequencies with IDF before aggregating the respective word vectors of the embedding. The proposed cosine similarity of IDF re-weighted word vectors is competitive with the TF-IDF baseline and even outperforms it in the news domain by a relative 15%.
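    The IDF re-weighted word centroid similarity described above fits in a few lines: each document is reduced to an IDF-weighted average of its word vectors, and documents are ranked by cosine similarity to the query centroid. This is a minimal sketch, not the paper's implementation; the toy two-dimensional embeddings and the smoothed-IDF formula are illustrative assumptions.

```python
import math
from collections import Counter

import numpy as np


def idf_weights(corpus_tokens):
    """Smoothed inverse document frequency per vocabulary word
    (an assumed formula; the paper may use a different variant)."""
    n_docs = len(corpus_tokens)
    df = Counter(w for doc in corpus_tokens for w in set(doc))
    return {w: math.log((1 + n_docs) / (1 + df[w])) + 1 for w in df}


def idf_centroid(tokens, embeddings, idf):
    """Aggregate a document's word vectors into an IDF-weighted centroid."""
    vecs, weights = [], []
    for w in tokens:
        if w in embeddings:
            vecs.append(embeddings[w])
            weights.append(idf.get(w, 1.0))
    if not vecs:
        return None
    return np.average(np.array(vecs), axis=0, weights=weights)


def cosine(a, b):
    """Cosine similarity between two centroid vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    Ranking then amounts to computing `cosine(idf_centroid(query, ...), idf_centroid(doc, ...))` for each document and sorting; the IDF weights down-weight frequent, uninformative words before the vectors are averaged.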
  • Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2012). From gr8 to great: Lexical access to SMS shortcuts. Frontiers in Psychology, 3, 150. doi:10.3389/fpsyg.2012.00150.

    Abstract

    Many contemporary texts include shortcuts, such as cu or phones4u. The aim of this study was to investigate how the meanings of shortcuts are retrieved. A primed lexical decision paradigm was used with shortcuts and the corresponding words as primes. The target word was associatively related to the meaning of the whole prime (cu/see you – goodbye), to a component of the prime (cu/see you – look), or unrelated to the prime. In Experiment 1, primes were presented for 57 ms. For both word and shortcut primes, responses were faster to targets preceded by whole-related than by unrelated primes. No priming from component-related primes was found. In Experiment 2, the prime duration was 1000 ms. The priming effect seen in Experiment 1 was replicated. Additionally, there was priming from component-related word primes, but not from component-related shortcut primes. These results indicate that the meanings of shortcuts can be retrieved without translating them first into corresponding words.
  • Gao, X., Levinthal, B. R., & Stine-Morrow, E. A. L. (2012). The effects of ageing and visual noise on conceptual integration during sentence reading. Quarterly journal of experimental psychology, 65(9), 1833-1847. doi:10.1080/17470218.2012.674146.

    Abstract

    The effortfulness hypothesis implies that difficulty in decoding the surface form, as in the case of age-related sensory limitations or background noise, consumes the attentional resources that are then unavailable for semantic integration in language comprehension. Because ageing is associated with sensory declines, degrading of the surface form by a noisy background can pose an extra challenge for older adults. In two experiments, this hypothesis was tested in a self-paced moving window paradigm in which younger and older readers' online allocation of attentional resources to surface decoding and semantic integration was measured as they read sentences embedded in varying levels of visual noise. When visual noise was moderate (Experiment 1), resource allocation among young adults was unaffected but older adults allocated more resources to decode the surface form at the cost of resources that would otherwise be available for semantic processing; when visual noise was relatively intense (Experiment 2), both younger and older participants allocated more attention to the surface form and less attention to semantic processing. The decrease in attentional allocation to semantic integration resulted in reduced recall of core ideas in both experiments, suggesting that a less organized semantic representation was constructed in noise. The greater vulnerability of older adults at relatively low levels of noise is consistent with the effortfulness hypothesis.
  • Garcia, R., Garrido Rodriguez, G., & Kidd, E. (2021). Developmental effects in the online use of morphosyntactic cues in sentence processing: Evidence from Tagalog. Cognition, 216: 104859. doi:10.1016/j.cognition.2021.104859.

    Abstract

    Children must necessarily process their input in order to learn it, yet the architecture of the developing parsing system and how it interfaces with acquisition is unclear. In the current paper we report experimental and corpus data investigating adults' and children's use of morphosyntactic cues for making incremental online predictions of thematic roles in Tagalog, a verb-initial symmetrical voice language of the Philippines. In Study 1, Tagalog-speaking adults completed a visual world eye-tracking experiment in which they viewed pictures of causative actions that were described by transitive sentences manipulated for voice and word order. The pattern of results showed that adults process agent and patient voice differently, predicting the upcoming noun in the patient voice but not in the agent voice, consistent with the observation of a patient voice preference in adult sentence production. In Study 2, our analysis of a corpus of child-directed speech showed that children heard more patient voice- than agent voice-marked verbs. In Study 3, 5-, 7-, and 9-year-old children completed a similar eye-tracking task as used in Study 1. The overall pattern of results suggested that, like the adults in Study 1, children process agent and patient voice differently in a manner that reflects the input distributions, with children developing towards the adult state across early childhood. The results are most consistent with theoretical accounts that identify a key role for input distributions in acquisition and language processing.

    Additional information

    1-s2.0-S001002772100278X-mmc1.docx
  • Gaspard III, J. C., Bauer, G. B., Mann, D. A., Boerner, K., Denum, L., Frances, C., & Reep, R. L. (2017). Detection of hydrodynamic stimuli by the postcranial body of Florida manatees (Trichechus manatus latirostris). Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 203, 111-120. doi:10.1007/s00359-016-1142-8.

    Abstract

    Manatees live in shallow, frequently turbid waters. The sensory means by which they navigate in these conditions are unknown. Poor visual acuity, lack of echolocation, and modest chemosensation suggest that other modalities play an important role. Rich innervation of sensory hairs that cover the entire body and enlarged somatosensory areas of the brain suggest that tactile senses are good candidates. Previous tests of detection of underwater vibratory stimuli indicated that they use passive movement of the hairs to detect particle displacements in the vicinity of a micron or less for frequencies from 10 to 150 Hz. In the current study, hydrodynamic stimuli were created by a sinusoidally oscillating sphere that generated a dipole field at frequencies from 5 to 150 Hz. Go/no-go tests of manatee postcranial mechanoreception of hydrodynamic stimuli indicated excellent sensitivity but about an order of magnitude less than the facial region. When the vibrissae were trimmed, detection thresholds were elevated, suggesting that the vibrissae were an important means by which detection occurred. Manatees were also highly accurate in two-choice directional discrimination: greater than 90% correct at all frequencies tested. We hypothesize that manatees utilize vibrissae as a three-dimensional array to detect and localize low-frequency hydrodynamic stimuli.
  • Gau, R., Noble, S., Heuer, K., Bottenhorn, K. L., Bilgin, I. P., Yang, Y.-F., Huntenburg, J. M., Bayer, J. M., Bethlehem, R. A., Rhoads, S. A., Vogelbacher, C., Borghesani, V., Levitis, E., Wang, H.-T., Van Den Bossche, S., Kobeleva, X., Legarreta, J. H., Guay, S., Atay, S. M., Varoquaux, G. P., Huijser, D. C., Sandström, M. S., Herholz, P., Nastase, S. A., Badhwar, A., Dumas, G., Schwab, S., Moia, S., Dayan, M., Bassil, Y., Brooks, P. P., Mancini, M., Shine, J. M., O’Connor, D., Xie, X., Poggiali, D., Friedrich, P., Heinsfeld, A. S., Riedl, L., Toro, R., Caballero-Gaudes, C., Eklund, A., Garner, K. G., Nolan, C. R., Demeter, D. V., Barrios, F. A., Merchant, J. S., McDevitt, E. A., Oostenveld, R., Craddock, R. C., Rokem, A., Doyle, A., Ghosh, S. S., Nikolaidis, A., Stanley, O. W., Uruñuela, E., Anousheh, N., Arnatkeviciute, A., Auzias, G., Bachar, D., Bannier, E., Basanisi, R., Basavaraj, A., Bedini, M., Bellec, P., Benn, R. A., Berluti, K., Bollmann, S., Bollmann, S., Bradley, C., Brown, J., Buchweitz, A., Callahan, P., Chan, M. Y., Chandio, B. Q., Cheng, T., Chopra, S., Chung, A. W., Close, T. G., Combrisson, E., Cona, G., Constable, R. T., Cury, C., Dadi, K., Damasceno, P. F., Das, S., De Vico Fallani, F., DeStasio, K., Dickie, E. W., Dorfschmidt, L., Duff, E. P., DuPre, E., Dziura, S., Esper, N. B., Esteban, O., Fadnavis, S., Flandin, G., Flannery, J. E., Flournoy, J., Forkel, S. J., Franco, A. R., Ganesan, S., Gao, S., García Alanis, J. C., Garyfallidis, E., Glatard, T., Glerean, E., Gonzalez-Castillo, J., Gould van Praag, C. D., Greene, A. S., Gupta, G., Hahn, C. A., Halchenko, Y. O., Handwerker, D., Hartmann, T.
S., Hayot-Sasson, V., Heunis, S., Hoffstaedter, F., Hohmann, D. M., Horien, C., Ioanas, H.-I., Iordan, A., Jiang, C., Joseph, M., Kai, J., Karakuzu, A., Kennedy, D. N., Keshavan, A., Khan, A. R., Kiar, G., Klink, P. C., Koppelmans, V., Koudoro, S., Laird, A. R., Langs, G., Laws, M., Licandro, R., Liew, S.-L., Lipic, T., Litinas, K., Lurie, D. J., Lussier, D., Madan, C. R., Mais, L.-T., Mansour L, S., Manzano-Patron, J., Maoutsa, D., Marcon, M., Margulies, D. S., Marinato, G., Marinazzo, D., Markiewicz, C. J., Maumet, C., Meneguzzi, F., Meunier, D., Milham, M. P., Mills, K. L., Momi, D., Moreau, C. A., Motala, A., Moxon-Emre, I., Nichols, T. E., Nielson, D. M., Nilsonne, G., Novello, L., O’Brien, C., Olafson, E., Oliver, L. D., Onofrey, J. A., Orchard, E. R., Oudyk, K., Park, P. J., Parsapoor, M., Pasquini, L., Peltier, S., Pernet, C. R., Pienaar, R., Pinheiro-Chagas, P., Poline, J.-B., Qiu, A., Quendera, T., Rice, L. C., Rocha-Hidalgo, J., Rutherford, S., Scharinger, M., Scheinost, D., Shariq, D., Shaw, T. B., Siless, V., Simmonite, M., Sirmpilatze, N., Spence, H., Sprenger, J., Stajduhar, A., Szinte, M., Takerkart, S., Tam, A., Tejavibulya, L., Thiebaut de Schotten, M., Thome, I., Tomaz da Silva, L., Traut, N., Uddin, L. Q., Vallesi, A., VanMeter, J. W., Vijayakumar, N., di Oleggio Castello, M. V., Vohryzek, J., Vukojević, J., Whitaker, K. J., Whitmore, L., Wideman, S., Witt, S. T., Xie, H., Xu, T., Yan, C.-G., Yeh, F.-C., Yeo, B. T., & Zuo, X.-N. (2021). Brainhack: Developing a culture of open, inclusive, community-driven neuroscience. Neuron, 109(11), 1769-1775. doi:10.1016/j.neuron.2021.04.001.

    Abstract

    Social factors play a crucial role in the advancement of science. New findings are discussed and theories emerge through social interactions, which usually take place within local research groups and at academic events such as conferences, seminars, or workshops. This system tends to amplify the voices of a select subset of the community—especially more established researchers—thus limiting opportunities for the larger community to contribute and connect. Brainhack (https://brainhack.org/) events (or Brainhacks for short) complement these formats in neuroscience with decentralized 2- to 5-day gatherings, in which participants from diverse backgrounds and career stages collaborate and learn from each other in an informal setting. The Brainhack format was introduced in a previous publication (Cameron Craddock et al., 2016; Figures 1A and 1B). It is inspired by the hackathon model (see glossary in Table 1), which originated in software development and has gained traction in science as a way to bring people together for collaborative work and educational courses. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists. Brainhacks additionally replace the sometimes-competitive context of traditional hackathons with a purely collaborative one and also feature informal dissemination of ongoing research through unconferences.

    Additional information

    supplementary information
  • Gebre, B. G., & Wittenburg, P. (2012). Adaptive automatic gesture stroke detection. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 458-461).

    Abstract

    Many gesture and sign language researchers manually annotate video recordings to systematically categorize, analyze and explain their observations. The number and kinds of annotations are so diverse and unpredictable that any attempt at developing non-adaptive automatic annotation systems is usually less effective. The trend in the literature has been to develop models that work for average users and for average scenarios. This approach has three main disadvantages. First, it is impossible to know beforehand all the patterns that could be of interest to all researchers. Second, it is practically impossible to find enough training examples for all patterns. Third, it is currently impossible to learn a model that is robustly applicable across all video quality-recording variations.
  • Gebre, B. G., Wittenburg, P., & Lenkiewicz, P. (2012). Towards automatic gesture stroke detection. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 231-235). European Language Resources Association.

    Abstract

    Automatic annotation of gesture strokes is important for many gesture and sign language researchers. The unpredictable diversity of human gestures and video recording conditions require that we adopt a more adaptive case-by-case annotation model. In this paper, we present a work-in-progress annotation model that allows a user to a) track hands/face, b) extract features, and c) distinguish strokes from non-strokes. The hands/face tracking is done with color matching algorithms and is initialized by the user. The initialization process is supported with immediate visual feedback. Sliders are also provided to support a user-friendly adjustment of skin color ranges. After successful initialization, features related to positions, orientations and speeds of tracked hands/face are extracted using unique identifiable features (corners) from a window of frames and are used for training a learning algorithm. Our preliminary results for stroke detection under non-ideal video conditions are promising and show the potential applicability of our methodology.
  • Geipel, I., Lattenkamp, E. Z., Dixon, M. M., Wiegrebe, L., & Page, R. A. (2021). Hearing sensitivity: An underlying mechanism for niche differentiation in gleaning bats. Proceedings of the National Academy of Sciences of the United States of America, 118: e2024943118. doi:10.1073/pnas.2024943118.

    Abstract

    Tropical ecosystems are known for high species diversity. Adaptations permitting niche differentiation enable species to coexist. Historically, research focused primarily on morphological and behavioral adaptations for foraging, roosting, and other basic ecological factors. Another important factor, however, is differences in sensory capabilities. So far, studies mainly have focused on the output of behavioral strategies of predators and their prey preference. Understanding the coexistence of different foraging strategies, however, requires understanding underlying cognitive and neural mechanisms. In this study, we investigate hearing in bats and how it shapes bat species coexistence. We present the hearing thresholds and echolocation calls of 12 different gleaning bats from the ecologically diverse Phyllostomid family. We measured their auditory brainstem responses to assess their hearing sensitivity. The audiograms of these species had similar overall shapes but differed substantially for frequencies below 9 kHz and in the frequency range of their echolocation calls. Our results suggest that differences among bats in hearing abilities contribute to the diversity in foraging strategies of gleaning bats. We argue that differences in auditory sensitivity could be important mechanisms shaping diversity in sensory niches and coexistence of species.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Andlauer, T. F. M., Mirza-Schreiber, N., Moll, K., Becker, J., Hoffmann, P., Ludwig, K. U., Czamara, D., St Pourcain, B., Honbolygó, F., Tóth, D., Csépe, V., Huguet, H., Chaix, Y., Iannuzzi, S., Demonet, J.-F., Morris, A. P., Hulslander, J., Willcutt, E. G., DeFries, J. C., Olson, R. K., Smith, S. D., Pennington, B. F., Vaessen, A., Maurer, U., Lyytinen, H., Peyrard-Janvid, M., Leppänen, P. H. T., Brandeis, D., Bonte, M., Stein, J. F., Talcott, J. B., Fauchereau, F., Wilcke, A., Kirsten, H., Müller, B., Francks, C., Bourgeron, T., Monaco, A. P., Ramus, F., Landerl, K., Kere, J., Scerri, T. S., Paracchini, S., Fisher, S. E., Schumacher, J., Nöthen, M. M., Müller-Myhsok, B., & Schulte-Körne, G. (2021). Genome-wide association study reveals new insights into the heritability and genetic correlates of developmental dyslexia. Molecular Psychiatry, 26, 3004-3017. doi:10.1038/s41380-020-00898-x.

    Abstract

    Developmental dyslexia (DD) is a learning disorder affecting the ability to read, with a heritability of 40–60%. A notable part of this heritability remains unexplained, and large genetic studies are warranted to identify new susceptibility genes and clarify the genetic bases of dyslexia. We carried out a genome-wide association study (GWAS) on 2274 dyslexia cases and 6272 controls, testing associations at the single variant, gene, and pathway level, and estimating heritability using single-nucleotide polymorphism (SNP) data. We also calculated polygenic scores (PGSs) based on large-scale GWAS data for different neuropsychiatric disorders and cortical brain measures, educational attainment, and fluid intelligence, testing them for association with dyslexia status in our sample. We observed statistically significant (p < 2.8 × 10⁻⁶) enrichment of associations at the gene level, for LOC388780 (20p13; uncharacterized gene), and for VEPH1 (3q25), a gene implicated in brain development. We estimated an SNP-based heritability of 20–25% for DD, and observed significant associations of dyslexia risk with PGSs for attention deficit hyperactivity disorder (at pT = 0.05 in the training GWAS: OR = 1.23[1.16; 1.30] per standard deviation increase; p = 8 × 10⁻¹³), bipolar disorder (1.53[1.44; 1.63]; p = 1 × 10⁻⁴³), schizophrenia (1.36[1.28; 1.45]; p = 4 × 10⁻²²), psychiatric cross-disorder susceptibility (1.23[1.16; 1.30]; p = 3 × 10⁻¹²), cortical thickness of the transverse temporal gyrus (0.90[0.86; 0.96]; p = 5 × 10⁻⁴), educational attainment (0.86[0.82; 0.91]; p = 2 × 10⁻⁷), and intelligence (0.72[0.68; 0.76]; p = 9 × 10⁻²⁹). This study suggests an important contribution of common genetic variants to dyslexia risk, and novel genomic overlaps with psychiatric conditions like bipolar disorder, schizophrenia, and cross-disorder susceptibility. Moreover, it revealed the presence of shared genetic foundations with a neural correlate previously implicated in dyslexia by neuroimaging evidence.
  • Gialluisi, A., Pippucci, T., Anikster, Y., Ozbek, U., Medlej-Hashim, M., Mégarbané, A., & Romeo, G. (2012). Estimating the allele frequency of autosomal recessive disorders through mutational records and consanguinity: The homozygosity index (HI). Annals of Human Genetics, 76, 159-167. doi:10.1111/j.1469-1809.2011.00693.x.

    Abstract

    In principle, mutational records make it possible to estimate frequencies of disease alleles (q) for autosomal recessive disorders using a novel approach based on the calculation of the Homozygosity Index (HI), i.e., the proportion of homozygous patients, which is complementary to the proportion of compound heterozygous patients P(CH). In other words, the rarer the disorder, the higher will be the HI and the lower will be the P(CH). To test this hypothesis we used mutational records of individuals affected with Familial Mediterranean Fever (FMF) and Phenylketonuria (PKU), born to either consanguineous or apparently unrelated parents from six population samples of the Mediterranean region. Despite the unavailability of precise values of the inbreeding coefficient for the general population, which are needed in the case of apparently unrelated parents, our estimates of q are very similar to those of previous descriptive epidemiological studies. Finally, we inferred from simulation studies that the minimum sample size needed to use this approach is 25 patients either with unrelated or first cousin parents. These results show that the HI can be used to produce a ranking order of allele frequencies of autosomal recessive disorders, especially in populations with high rates of consanguineous marriages.
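    The core quantity in the abstract above, the Homozygosity Index, reduces to a simple proportion over genotyped patients. A minimal sketch follows; the function name and the example counts are illustrative assumptions, not values from the paper:

    ```python
    def homozygosity_index(n_homozygous: int, n_compound_het: int) -> float:
        """Proportion of patients homozygous for a single mutation (HI).

        HI is complementary to the proportion of compound heterozygotes,
        P(CH): HI + P(CH) = 1. As the abstract notes, the rarer the
        disorder, the higher the HI and the lower the P(CH).
        """
        total = n_homozygous + n_compound_het
        if total == 0:
            raise ValueError("need at least one genotyped patient")
        return n_homozygous / total

    # Hypothetical cohort: 18 homozygous, 7 compound-heterozygous patients
    hi = homozygosity_index(18, 7)    # 0.72
    p_ch = 1 - hi                     # 0.28
    ```

    Converting HI into an allele-frequency estimate q additionally requires the population inbreeding coefficient, which the abstract notes is often unavailable and is therefore not sketched here.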
