Publications

  • Ernestus, M., & Baayen, R. H. (2011). Corpora and exemplars in phonology. In J. A. Goldsmith, J. Riggle, & A. C. Yu (Eds.), The handbook of phonological theory (2nd ed.) (pp. 374-400). Oxford: Wiley-Blackwell.
  • Ernestus, M., & Giezenaar, G. (2015). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Klassiek vakwerk II: Achtergronden van het NT2-onderwijs (pp. 143-155). Amsterdam: Boom.
  • Ernestus, M. (2011). Gradience and categoricality in phonological theory. In M. Van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology (pp. 2115-2136). Wiley-Blackwell.
  • Esling, J. H., Benner, A., & Moisik, S. R. (2015). Laryngeal articulatory function and speech origins. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 2-7). Glasgow: ICPhS.

    Abstract

    The larynx is the essential articulatory mechanism that primes the vocal tract. Far from being only a glottal source of voicing, the complex laryngeal mechanism entrains the ontogenetic acquisition of speech and, through coarticulatory coupling, guides the production of oral sounds in the infant vocal tract. As such, it is not possible to speculate as to the origins of the speaking modality in humans without considering the fundamental role played by the laryngeal articulatory mechanism. The Laryngeal Articulator Model, which divides the vocal tract into a laryngeal component and an oral component, serves as a basis for describing early infant speech and for positing how speech sounds evolving in various hominids may be related phonetically. To this end, we offer some suggestions for how the evolution and development of vocal tract anatomy fit with our infant speech acquisition data and discuss the implications this has for explaining phonetic learning and for interpreting the biological evolution of the human vocal tract in relation to speech and speech acquisition.
  • Evans, N., Levinson, S. C., Gaby, A., & Majid, A. (2011). Introduction: Reciprocals and semantic typology. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 1-28). Amsterdam: Benjamins.

    Abstract

    Reciprocity lies at the heart of social cognition, and with it so does the encoding of reciprocity in language via reciprocal constructions. Despite the prominence of strong universal claims about the semantics of reciprocal constructions, there is considerable descriptive literature on the semantics of reciprocals that seems to indicate variable coding and subtle cross-linguistic differences in meaning of reciprocals, both of which would make it impossible to formulate a single, essentialising definition of reciprocal semantics. These problems make it vital for studies in the semantic typology of reciprocals to employ methodologies that allow the relevant categories to emerge objectively from cross-linguistic comparison of standardised stimulus materials. We situate the rationale for the 20-language study that forms the basis for this book within this empirical approach to semantic typology, and summarise some of the findings.

  • Fawcett, C., & Liszkowski, U. (2015). Social referencing during infancy and early childhood across cultures. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (2nd ed., pp. 556-562). doi:10.1016/B978-0-08-097086-8.23169-3.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool.
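
    As a rough illustration of the scoring idea sketched in this abstract (not the authors' implementation), the Python snippet below computes a binary lexical measure and a gradient orthographic measure, the latter as normalized edit-distance similarity; the function names and example phrases are invented for illustration.

        from difflib import SequenceMatcher

        def orthographic_similarity(response: str, target: str) -> float:
            """Gradient similarity in [0, 1] based on character-level matching."""
            return SequenceMatcher(None, response.lower(), target.lower()).ratio()

        def lexical_accuracy(response: str, target: str) -> float:
            """Proportion of target words reproduced exactly (order-insensitive)."""
            target_words = target.lower().split()
            response_words = set(response.lower().split())
            return sum(w in response_words for w in target_words) / len(target_words)

        # Illustrative phrase pair (not taken from the study's materials).
        target = "he must have missed the early train"
        response = "he must of mist the early train"
        print(orthographic_similarity(response, target))   # gradient orthographic score in [0, 1]
        print(lexical_accuracy(response, target))          # proportion of target words recovered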
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.

    Abstract

    Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation.
  • Fikkert, P., & Chen, A. (2011). The role of word-stress and intonation in word recognition in Dutch 14- and 24-month-olds. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th annual Boston University Conference on Language Development (pp. 222-232). Somerville, MA: Cascadilla Press.
  • Filippi, P. (2015). Before Babel: The Evolutionary Roots of Human Language. In E. Velmezova, K. Kull, & S. J. Cowley (Eds.), Biosemiotic Perspectives on Language and Linguistics (pp. 191-204). Springer International Publishing. doi:10.1007/978-3-319-20663-9_10.

    Abstract

    The aim of the present work is to identify the evolutionary origins of the ability to speak and understand a natural language. I will adopt Botha’s “Windows Approach” (Language and Communication, 2006, 26, pp. 129–143) in order to justify the following two assumptions, which concern the evolutionary continuity between human language and animals’ communication systems: (a) despite the uniqueness of human language in sharing and conveying utterances with an open-ended structure, some isolated components of our linguistic competence are shared with non-human primates, grounding a line of evolutionary continuity; (b) the very first “linguistic” utterances were holistic, that is, whole bunches of sounds able to convey information despite their lack of modern syntax. I will address such suppositions through the comparative analysis of three constitutive features of human language: syntax, the semantic value of utterances, and the ability to attribute mental states to conspecifics, i.e. the theory of mind.
  • Fisher, S. E. (2013). Building bridges between genes, brains and language. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 425-454). Cambridge, Mass: MIT Press.
  • Fisher, S. E. (2019). Key issues and future directions: Genes and language. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 609-620). Cambridge, MA: MIT Press.
  • Fisher, S. E. (2006). How can animal studies help to uncover the roles of genes implicated in human speech and language disorders? In G. S. Fisch, & J. Flint (Eds.), Transgenic and knockout models of neuropsychiatric disorders (pp. 127-149). Totowa, NJ: Humana Press.

    Abstract

    The mysterious human propensity for acquiring speech and language has fascinated scientists for decades. A substantial body of evidence suggests that this capacity is rooted in aspects of neurodevelopment that are specified at the genomic level. Researchers have begun to identify genetic factors that increase susceptibility to developmental disorders of speech and language, thereby offering the first molecular entry points into neuronal mechanisms underlying human vocal communication. The identification of genetic variants influencing language acquisition facilitates the analysis of animal models in which the corresponding orthologs are disrupted. At face value, the situation raises a perplexing question: if speech and language are uniquely human, can any relevant insights be gained from investigations of gene function in other species? This chapter addresses the question using the example of FOXP2, a gene implicated in a severe monogenic speech and language disorder. FOXP2 encodes a transcription factor that is highly conserved in vertebrate species, both in terms of protein sequence and expression patterns. Current data suggest that an earlier version of this gene, present in the common ancestor of humans, rodents, and birds, was already involved in establishing neuronal circuits underlying sensory-motor integration and learning of complex motor sequences. This may have represented one of the factors providing a permissive neural environment for subsequent evolution of vocal learning. Thus, dissection of neuromolecular pathways regulated by Foxp2 in nonlinguistic species is a necessary prerequisite for understanding the role of the human version of the gene in speech and language.
  • Fisher, S. E. (2015). Translating the genome in human neuroscience. In G. Marcus, & J. Freeman (Eds.), The future of the brain: Essays by the world's leading neuroscientists (pp. 149-159). Princeton, NJ: Princeton University Press.
  • Fisher, V. J. (2017). Dance as Embodied Analogy: Designing an Empirical Research Study. In M. Van Delft, J. Voets, Z. Gündüz, H. Koolen, & L. Wijers (Eds.), Danswetenschap in Nederland. Utrecht: Vereniging voor Dansonderzoek (VDO).
  • Fitz, H., Chang, F., & Christiansen, M. H. (2011). A connectionist account of the acquisition and processing of relative clauses. In E. Kidd (Ed.), The acquisition of relative clauses. Processing, typology and function (pp. 39-60). Amsterdam: Benjamins.

    Abstract

    Relative clause processing depends on the grammatical role of the head noun in the subordinate clause. This has traditionally been explained in terms of cognitive limitations. We suggest that structure-related processing differences arise from differences in experience with these structures. We present a connectionist model which learns to produce utterances with relative clauses from exposure to message-sentence pairs. The model shows how various factors such as frequent subsequences, structural variations, and meaning conspire to create differences in the processing of these structures. The predictions of this learning-based account have been confirmed in behavioral studies with adults. This work shows that structural regularities that govern relative clause processing can be explained within a usage-based approach to recursion.
  • Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 897-902). Austin, TX: Cognitive Science Society.

    Abstract

    Language acquisition involves learning nonadjacent dependencies that can obtain between words in a sentence. Several artificial grammar learning studies have shown that the ability of adults and children to detect dependencies between A and B in frames AXB is influenced by the amount of variation in the X element. This paper presents a model of statistical learning which displays similar behavior on this task and generalizes in a human-like way. The model was also used to predict human behavior for increased distance and more variation in dependencies. We compare our model-based approach with the standard invariance account of the variability effect.
  • Fitz, H. (2006). Church's thesis and physical computation. In A. Olszewski, J. Wolenski, & R. Janusz (Eds.), Church's Thesis after 70 years (pp. 175-219). Frankfurt a. M: Ontos Verlag.
  • Flecken, M., & Gerwien, J. (2013). Grammatical aspect modulates event duration estimations: findings from Dutch. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (pp. 2309-2314). Austin, TX: Cognitive Science Society.
  • Floyd, S., & Bruil, M. (2011). Interactional functions as part of the grammar: The suffix –ba in Cha’palaa. In P. K. Austin, O. Bond, D. Nathan, & L. Marten (Eds.), Proceedings of the 3rd Conference on Language Description and Theory (pp. 91-100). London: SOAS.
  • Floyd, S. (2017). Requesting as a means for negotiating distributed agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 67-78). Oxford: Oxford University Press.
  • Floyd, S. (2006). The cash value of style in the Andean market. In E.-X. Lee, K. M. Markman, V. Newdick, & T. Sakuma (Eds.), SALSA 13: Texas Linguistic Forum vol. 49. Austin, TX: Texas Linguistics Forum.

    Abstract

    This paper examines code and style shifting during sales transactions based on two market case studies from highland Ecuador. Bringing together ideas of linguistic economy with work on stylistic variation and ethnohistorical research on Andean markets, I study bartering, market calls and sales pitches to show how sellers create stylistic performances distinguished by contrasts of code, register and poetic features. The interaction of the symbolic value of language with the economic values of the market presents a place to examine the relationship between discourse and the material world.
  • Floyd, S. (2013). Semantic transparency and cultural calquing in the Northwest Amazon. In P. Epps, & K. Stenzel (Eds.), Upper Rio Negro: Cultural and linguistic interaction in northwestern Amazonia (pp. 271-308). Rio de Janeiro: Museu do Indio. Retrieved from http://www.museunacional.ufrj.br/ppgas/livros_ele.html.

    Abstract

    The ethnographic literature has sometimes described parts of the northwest Amazon as areas of shared culture across linguistic groups. This paper illustrates how a principle of semantic transparency across languages is a key means of establishing elements of a common regional culture through practices like the calquing of ethnonyms and toponyms so that they are semantically, but not phonologically, equivalent across languages. It places the upper Rio Negro area of the northwest Amazon in a general discussion of cross-linguistic naming practices in South America and considers the extent to which a preference for semantic transparency can be linked to cases of widespread cultural ‘calquing’, in which culturally-important meanings are kept similar across different linguistic systems. It also addresses the principle of semantic transparency beyond specific referential phrases and into larger discourse structures. It concludes that an attention to semiotic practices in multilingual settings can provide new and more complex ways of thinking about the idea of shared culture.
  • Fox, E. (2020). Literary Jerry and justice. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Francks, C. (2019). The genetic bases of brain lateralization. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 595-608). Cambridge, MA: MIT Press.
  • Frank, S. L., Monaghan, P., & Tsoukala, C. (2019). Neural network models of language acquisition and processing. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 277-293). Cambridge, MA: MIT Press.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. In Proceedings of the 18th International Congress of Phonetic Sciences. Glasgow: the University of Glasgow.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues. Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Frost, R. L. A., Isbilen, E. S., Christiansen, M. H., & Monaghan, P. (2019). Testing the limits of non-adjacent dependency learning: Statistical segmentation and generalisation across domains. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1787-1793). Montreal, QB: Cognitive Science Society.

    Abstract

    Achieving linguistic proficiency requires identifying words from speech, and discovering the constraints that govern the way those words are used. In a recent study of non-adjacent dependency learning, Frost and Monaghan (2016) demonstrated that learners may perform these tasks together, using similar statistical processes - contrary to prior suggestions. However, in their study, non-adjacent dependencies were marked by phonological cues (plosive-continuant-plosive structure), which may have influenced learning. Here, we test the necessity of these cues by comparing learning across three conditions: fixed phonology, which contains these cues; varied phonology, which omits them; and shapes, which uses visual shape sequences to assess the generality of statistical processing for these tasks. Participants segmented the sequences and generalized the structure in both auditory conditions, but learning was best when phonological cues were present. Learning was around chance on both tasks for the visual shapes group, indicating statistical processing may critically differ across domains.
  • Frost, R. L. A., & Monaghan, P. (2020). Insights from studying statistical learning. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 65-89). Amsterdam: John Benjamins. doi:10.1075/tilar.27.03fro.

    Abstract

    Acquiring language is notoriously complex, yet for the majority of children this feat is accomplished with remarkable ease. Usage-based accounts of language acquisition suggest that this success can be largely attributed to the wealth of experience with language that children accumulate over the course of language acquisition. One field of research that is heavily underpinned by this principle of experience is statistical learning, which posits that learners can perform powerful computations over the distribution of information in a given input, which can help them to discern precisely how that input is structured, and how it operates. A growing body of work brings this notion to bear in the field of language acquisition, due to a developing understanding of the richness of the statistical information contained in speech. In this chapter we discuss the role that statistical learning plays in language acquisition, emphasising the importance of both the distribution of information within language, and the situation in which language is being learnt. First, we address the types of statistical learning that apply to a range of language learning tasks, asking whether the statistical processes purported to support language learning are the same or distinct across different tasks in language acquisition. Second, we expand the perspective on what counts as environmental input, by determining how statistical learning operates over the situated learning environment, and not just sequences of sounds in utterances. Finally, we address the role of variability in children’s input, and examine how statistical learning can accommodate (and perhaps even exploit) this during language acquisition.
  • De La Fuente, J., Casasanto, D., Román, A., & Santiago, J. (2011). Searching for cultural influences on the body-specific association of preferred hand and emotional valence. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2616-2620). Austin, TX: Cognitive Science Society.
  • Furman, R., & Ozyurek, A. (2006). The use of discourse markers in adult and child Turkish oral narratives: Şey, yani and işte. In S. Yagcioglu, & A. Dem Deger (Eds.), Advances in Turkish linguistics (pp. 467-480). Izmir: Dokuz Eylul University Press.
  • Furman, R., Ozyurek, A., & Allen, S. E. M. (2006). Learning to express causal events across languages: What do speech and gesture patterns reveal? In D. Bamman, T. Magnitskaia, & C. Zaller (Eds.), Proceedings of the 30th Annual Boston University Conference on Language Development (pp. 190-201). Somerville, Mass: Cascadilla Press.
  • Fusaroli, R., Tylén, K., Garly, K., Steensig, J., Christiansen, M. H., & Dingemanse, M. (2017). Measures and mechanisms of common ground: Backchannels, conversational repair, and interactive alignment in free and task-oriented social interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2055-2060). Austin, TX: Cognitive Science Society.

    Abstract

    A crucial aspect of everyday conversational interactions is our ability to establish and maintain common ground. Understanding the relevant mechanisms involved in such social coordination remains an important challenge for cognitive science. While common ground is often discussed in very general terms, different contexts of interaction are likely to afford different coordination mechanisms. In this paper, we investigate the presence and relation of three mechanisms of social coordination – backchannels, interactive alignment and conversational repair – across free and task-oriented conversations. We find significant differences: task-oriented conversations involve higher presence of repair – restricted offers in particular – and backchannel, as well as a reduced level of lexical and syntactic alignment. We find that restricted repair is associated with lexical alignment and open repair with backchannels. Our findings highlight the need to explicitly assess several mechanisms at once and to investigate diverse activities to understand their role and relations.
  • Galke, L., Vagliano, I., & Scherp, A. (2019). Can graph neural networks go „online“? An analysis of pretraining and inference. In Proceedings of the Representation Learning on Graphs and Manifolds: ICLR2019 Workshop.

    Abstract

    Large-scale graph data in real-world applications is often not static but dynamic, i.e., new nodes and edges appear over time. Current graph convolution approaches are promising, especially when all the graph’s nodes and edges are available during training. When unseen nodes and edges are inserted after training, it is not yet evaluated whether up-training or re-training from scratch is preferable. We construct an experimental setup, in which we insert previously unseen nodes and edges after training and conduct a limited amount of inference epochs. In this setup, we compare adapting pretrained graph neural networks against retraining from scratch. Our results show that pretrained models yield high accuracy scores on the unseen nodes and that pretraining is preferable over retraining from scratch. Our experiments represent a first step to evaluate and develop truly online variants of graph neural networks.
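
    The comparison described here (up-training a pretrained graph model versus retraining from scratch once new nodes arrive) can be illustrated with a minimal Python/PyTorch sketch; the single hand-written graph-convolution layer, the random graph, and the training budgets are assumptions for illustration, not the authors' architecture or datasets, so only the procedure (not the numbers) is meaningful.

        import torch

        def normalized_adjacency(adj):
            """Return D^(-1/2) (A + I) D^(-1/2) for a dense adjacency matrix."""
            a_hat = adj + torch.eye(adj.shape[0])
            d_inv_sqrt = a_hat.sum(1).pow(-0.5)
            return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

        def train(weights, a_norm, x, y, epochs):
            """Train a one-layer graph convolution (logits = A_hat @ X @ W)."""
            opt = torch.optim.Adam([weights], lr=0.05)
            for _ in range(epochs):
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(a_norm @ x @ weights, y)
                loss.backward()
                opt.step()
            return weights

        torch.manual_seed(0)
        n_old, n_new, n_feat, n_class = 80, 20, 16, 3
        x = torch.randn(n_old + n_new, n_feat)                  # random node features
        y = torch.randint(0, n_class, (n_old + n_new,))         # random node labels
        adj = (torch.rand(n_old + n_new, n_old + n_new) < 0.05).float()
        adj = ((adj + adj.t()) > 0).float()                     # random symmetric graph

        # Pretrain on the "old" subgraph only.
        w = torch.randn(n_feat, n_class, requires_grad=True)
        train(w, normalized_adjacency(adj[:n_old, :n_old]), x[:n_old], y[:n_old], 200)

        # Option A: up-train the pretrained weights on the full graph (small budget).
        w_up = w.detach().clone().requires_grad_(True)
        train(w_up, normalized_adjacency(adj), x, y, 20)

        # Option B: retrain from scratch on the full graph with the same budget.
        w_scratch = torch.randn(n_feat, n_class, requires_grad=True)
        train(w_scratch, normalized_adjacency(adj), x, y, 20)

        for name, weights in [("up-trained", w_up), ("from scratch", w_scratch)]:
            pred = (normalized_adjacency(adj) @ x @ weights).argmax(1)
            acc = (pred[n_old:] == y[n_old:]).float().mean().item()
            print(name, "accuracy on previously unseen nodes:", round(acc, 2))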
  • Galke, L., Melnychuk, T., Seidlmayer, E., Trog, S., Foerstner, K., Schultz, C., & Tochtermann, K. (2019). Inductive learning of concept representations from library-scale bibliographic corpora. In K. David, K. Geihs, M. Lange, & G. Stumme (Eds.), Informatik 2019: 50 Jahre Gesellschaft für Informatik - Informatik für Gesellschaft (pp. 219-232). Bonn: Gesellschaft für Informatik e.V. doi:10.18420/inf2019_26.
  • Galke, L., Mai, F., Schelten, A., Brunsch, D., & Scherp, A. (2017). Using titles vs. full-text as source for automated semantic document annotation. In O. Corcho, K. Janowicz, G. Rizzo, I. Tiddi, & D. Garijo (Eds.), Proceedings of the 9th International Conference on Knowledge Capture (K-CAP 2017). New York: ACM.

    Abstract

    We conduct the first systematic comparison of automated semantic annotation based on either the full-text or only on the title metadata of documents. Apart from the prominent text classification baselines kNN and SVM, we also compare recent techniques of Learning to Rank and neural networks and revisit the traditional methods logistic regression, Rocchio, and Naive Bayes. Across three of our four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.
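
    A minimal Python sketch of the title-versus-full-text contrast, assuming a TF-IDF representation with a linear SVM (one of the baselines named above); the eight toy documents are invented placeholders for a real bibliographic corpus, so the scores only illustrate the comparison.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Invented mini-corpus of (title, full text, label) triples.
        corpus = [
            ("Deep learning for text classification", "We train neural networks that assign topic labels to documents using word features.", "cs"),
            ("Graph algorithms for route planning", "Shortest-path computation on large road networks with heuristic search.", "cs"),
            ("Support vector machines revisited", "Margin-based classifiers and kernel methods for supervised learning problems.", "cs"),
            ("Indexing compressed text collections", "Succinct data structures for fast substring queries over compressed corpora.", "cs"),
            ("Gene expression in zebrafish embryos", "Developmental regulation of gene expression during zebrafish embryogenesis.", "bio"),
            ("Protein folding dynamics", "Molecular dynamics simulations of folding pathways and energy landscapes of proteins.", "bio"),
            ("CRISPR editing in model organisms", "Targeted genome modification in mice and flies using CRISPR-Cas9 systems.", "bio"),
            ("Cortical development in the mouse", "Cell migration and differentiation during development of the mouse neocortex.", "bio"),
        ]
        labels = [label for _, _, label in corpus]

        # Train the same model on titles only and on full text, and compare.
        for source, texts in [("title", [t for t, _, _ in corpus]),
                              ("full-text", [f for _, f, _ in corpus])]:
            model = make_pipeline(TfidfVectorizer(), LinearSVC())
            scores = cross_val_score(model, texts, labels, cv=4, scoring="accuracy")
            print(source, "accuracy:", scores.mean())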
  • Galke, L., Saleh, A., & Scherp, A. (2017). Word embeddings for practical information retrieval. In M. Eibl, & M. Gaedke (Eds.), INFORMATIK 2017 (pp. 2155-2167). Bonn: Gesellschaft für Informatik. doi:10.18420/in2017_215.

    Abstract

    We assess the suitability of word embeddings for practical information retrieval scenarios. Thus, we assume that users issue ad-hoc short queries where we return the first twenty retrieved documents after applying a boolean matching operation between the query and the documents. We compare the performance of several techniques that leverage word embeddings in the retrieval models to compute the similarity between the query and the documents, namely word centroid similarity, paragraph vectors, Word Mover’s distance, as well as our novel inverse document frequency (IDF) re-weighted word centroid similarity. We evaluate the performance using the ranking metrics mean average precision, mean reciprocal rank, and normalized discounted cumulative gain. Additionally, we inspect the retrieval models’ sensitivity to document length by using either only the title or the full-text of the documents for the retrieval task. We conclude that word centroid similarity is the best competitor to state-of-the-art retrieval models. It can be further improved by re-weighting the word frequencies with IDF before aggregating the respective word vectors of the embedding. The proposed cosine similarity of IDF re-weighted word vectors is competitive to the TF-IDF baseline and even outperforms it in case of the news domain with a relative percentage of 15%.
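
    The IDF re-weighted word centroid similarity described above can be sketched in a few lines of Python; the toy embeddings, IDF values and documents below are stand-ins for a real pretrained embedding and corpus statistics, so this illustrates the idea rather than the authors' implementation.

        import numpy as np

        def centroid(tokens, embeddings, idf=None):
            """(IDF-weighted) mean of the word vectors of the known tokens."""
            vectors, weights = [], []
            for tok in tokens:
                if tok in embeddings:
                    vectors.append(embeddings[tok])
                    weights.append(idf.get(tok, 1.0) if idf else 1.0)
            return np.average(np.array(vectors), axis=0, weights=weights) if vectors else None

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        rng = np.random.default_rng(0)
        vocab = ["election", "vote", "parliament", "football", "goal", "match"]
        embeddings = {w: rng.normal(size=50) for w in vocab}        # placeholder word vectors
        idf = {"election": 2.3, "vote": 2.0, "parliament": 2.5,     # placeholder IDF weights
               "football": 1.8, "goal": 1.2, "match": 1.1}

        query = ["election", "vote"]
        documents = {"doc_a": ["parliament", "vote", "election"],
                     "doc_b": ["football", "match", "goal"]}
        q = centroid(query, embeddings, idf)
        for name, tokens in documents.items():
            print(name, round(cosine(q, centroid(tokens, embeddings, idf)), 3))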
  • Gazendam, L., Malaisé, V., Schreiber, G., & Brugman, H. (2006). Deriving semantic annotations of an audiovisual program from contextual texts. In First International Workshop on Semantic Web Annotations for Multimedia (SWAMM 2006).

    Abstract

    The aim of this paper is to explore whether indexing terms for an audiovisual program can be derived from contextual texts automatically. For this we apply natural-language processing techniques to contextual texts of two Dutch TV-programs. We use a Dutch domain thesaurus to derive possible metadata. This possible metadata is ranked by an algorithm which uses the relations of the thesaurus. We evaluate the results by comparing them to human made descriptions.
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). Automatic sign language identification. In Proceedings of the 20th IEEE International Conference on Image Processing (ICIP) (pp. 2626-2630).

    Abstract

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy using only low-level visual features.
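
    A minimal Python sketch of the classification setup, assuming each video has already been summarised as a fixed-length vector of low-level visual features; the random feature matrix and labels stand in for the Dicta-Sign material, so the cross-validated score here is near chance rather than the roughly 95% F1 reported on real data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_videos, n_features = 120, 64
        X = rng.normal(size=(n_videos, n_features))     # stand-in for per-video visual feature vectors
        y = rng.integers(0, 2, size=n_videos)           # 0 = British SL, 1 = Greek SL (placeholder labels)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
        print("mean F1:", scores.mean().round(2))       # near chance on random data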
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). Automatic signer diarization - the mover is the signer approach. In 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 283-287). doi:10.1109/CVPRW.2013.49.

    Abstract

    We present a vision-based method for signer diarization -- the task of automatically determining "who signed when?" in a video. This task has similar motivations and applications as speaker diarization but has received little attention in the literature. In this paper, we motivate the problem and propose a method for solving it. The method is based on the hypothesis that signers make more movements than their interlocutors. Experiments on four videos (a total of 1.4 hours and each consisting of two signers) show the applicability of the method. The best diarization error rate (DER) obtained is 0.16.
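
    The "mover is the signer" hypothesis can be sketched as a simple motion-energy comparison in Python; splitting each frame into a left and a right region (one per signer) and the window size are illustrative assumptions, and random arrays stand in for real video.

        import numpy as np

        def diarize(frames, window=25):
            """frames: array (T, H, W). Returns one label per window: 0 = left signer, 1 = right signer."""
            diffs = np.abs(np.diff(frames.astype(float), axis=0))   # per-pixel motion between frames
            half = frames.shape[2] // 2                             # split frame into two signer regions
            left = diffs[:, :, :half].sum(axis=(1, 2))              # motion energy per frame, left region
            right = diffs[:, :, half:].sum(axis=(1, 2))             # motion energy per frame, right region
            labels = []
            for start in range(0, len(diffs), window):
                left_e = left[start:start + window].sum()
                right_e = right[start:start + window].sum()
                labels.append(int(right_e > left_e))                # the region that moves more "signs"
            return labels

        frames = np.random.default_rng(2).random((250, 120, 160))   # placeholder video (T, H, W)
        print(diarize(frames))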
  • Gebre, B. G., Zampieri, M., Wittenburg, P., & Heskes, T. (2013). Improving Native Language Identification with TF-IDF weighting. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 216-223).

    Abstract

    This paper presents a Native Language Identification (NLI) system based on TF-IDF weighting schemes and using linear classifiers - support vector machines, logistic regressions and perceptrons. The system was one of the participants of the 2013 NLI Shared Task in the closed-training track, achieving 0.814 overall accuracy for a set of 11 native languages. This accuracy was only 2.2 percentage points lower than the winner's performance. Furthermore, with subsequent evaluations using 10-fold cross-validation (as given by the organizers) on the combined training and development data, the best average accuracy obtained is 0.8455 and the features that contributed to this accuracy are the TF-IDF of the combined unigrams and bigrams of words.
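
    A Python sketch of the feature and classifier setup named in this abstract (TF-IDF over combined word unigrams and bigrams with linear classifiers); the exact weighting scheme and the commented-out corpus loader are assumptions, since the shared-task data are not included here.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression, Perceptron
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        def nli_cv_accuracy(essays, native_langs, classifier, folds=10):
            """Cross-validated accuracy with TF-IDF over combined word unigrams and bigrams."""
            model = make_pipeline(
                TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # one possible TF-IDF variant
                classifier,
            )
            return cross_val_score(model, essays, native_langs, cv=folds).mean()

        # essays, native_langs = load_corpus()   # hypothetical loader: supply your own essay corpus
        # for clf in (LinearSVC(), LogisticRegression(max_iter=1000), Perceptron()):
        #     print(type(clf).__name__, nli_cv_accuracy(essays, native_langs, clf))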
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). The gesturer is the speaker. In Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013) (pp. 3751-3755).

    Abstract

    We present and solve the speaker diarization problem in a novel way. We hypothesize that the gesturer is the speaker and that identifying the gesturer can be taken as identifying the active speaker. We provide evidence in support of the hypothesis from gesture literature and audio-visual synchrony studies. We also present a vision-only diarization algorithm that relies on gestures (i.e. upper body movements). Experiments carried out on 8.9 hours of a publicly available dataset (the AMI meeting data) show that diarization error rates as low as 15% can be achieved.
  • Gijssels, T., Bottini, R., Rueschemeyer, S.-A., & Casasanto, D. (2013). Space and time in the parietal cortex: fMRI evidence for a neural asymmetry. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 495-500). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0113/index.html.

    Abstract

    How are space and time related in the brain? This study contrasts two proposals that make different predictions about the interaction between spatial and temporal magnitudes. Whereas ATOM implies that space and time are symmetrically related, Metaphor Theory claims they are asymmetrically related. Here we investigated whether space and time activate the same neural structures in the inferior parietal cortex (IPC) and whether the activation is symmetric or asymmetric across domains. We measured participants’ neural activity while they made temporal and spatial judgments on the same visual stimuli. The behavioral results replicated earlier observations of a space-time asymmetry: Temporal judgments were more strongly influenced by irrelevant spatial information than vice versa. The BOLD fMRI data indicated that space and time activated overlapping clusters in the IPC and that, consistent with Metaphor Theory, this activation was asymmetric: The shared region of IPC was activated more strongly during temporal judgments than during spatial judgments. We consider three possible interpretations of this neural asymmetry, based on 3 possible functions of IPC.
  • Gillespie, K., & San Roque, L. (2011). Music and language in Duna pikono. In A. Rumsey, & D. Niles (Eds.), Sung tales from the Papua New Guinea Highlands: Studies in form, meaning and sociocultural context (pp. 49-63). Canberra: ANU E Press.
  • Goldrick, M., Brehm, L., Cho, P. W., & Smolensky, P. (2019). Transient blend states and discrete agreement-driven errors in sentence production. In G. J. Snover, M. Nelson, B. O'Connor, & J. Pater (Eds.), Proceedings of the Society for Computation in Linguistics (SCiL 2019) (pp. 375-376). doi:10.7275/n0b2-5305.
  • Goudbeek, M., & Swingley, D. (2006). Saliency effects in distributional learning. In Proceedings of the 11th Australasian International Conference on Speech Science and Technology (pp. 478-482). Auckland: Australasian Speech Science and Technology Association.

    Abstract

    Acquiring the sounds of a language involves learning to recognize distributional patterns present in the input. We show that among adult learners, this distributional learning of auditory categories (which are conceived of here as probability density functions in a multidimensional space) is constrained by the salience of the dimensions that form the axes of this perceptual space. Only with a particular ratio of variation in the perceptual dimensions was category learning driven by the distributional properties of the input.
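
    Distributional category learning over a multidimensional perceptual space can be illustrated with a small Python sketch: unlabelled stimuli drawn from two clusters are fitted with a two-component Gaussian mixture, which recovers the categories without labels. The two dimensions, their scales, and the sample sizes are invented and do not reproduce the study's stimuli or its saliency manipulation.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Two hypothetical phonetic categories in a 2-D perceptual space
        # (e.g. a duration-like and a spectral dimension); values are invented.
        category_a = rng.normal(loc=[120.0, 1.2], scale=[15.0, 0.1], size=(100, 2))
        category_b = rng.normal(loc=[200.0, 1.8], scale=[15.0, 0.1], size=(100, 2))
        stimuli = np.vstack([category_a, category_b])               # unlabeled input distribution

        # Fit a two-component mixture: density estimation recovers the categories.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(stimuli)
        print(gmm.means_.round(1))                                  # recovered category centres
        print(gmm.predict(stimuli[:5]))                             # assignments for the first stimuli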
  • Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2017). Auditory and phonetic category formation. In H. Cohen, & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (2nd revised ed.) (pp. 687-708). Amsterdam: Elsevier.
  • Güldemann, T., & Hammarström, H. (2020). Geographical axis effects in large-scale linguistic distributions. In M. Crevels, & P. Muysken (Eds.), Language Dispersal, Diversification, and Contact. Oxford: Oxford University Press.
  • Gullberg, M. (2011). Multilingual multimodality: Communicative difficulties and their solutions in second-language use. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and body in the material world (pp. 137-151). Cambridge: Cambridge University Press.

    Abstract

    Using a poorly mastered second language (L2) in interaction with a native speaker is a challenging task. This paper explores how L2 speakers and their native interlocutors together deploy gestures and speech to sustain problematic interaction. Drawing on native and non-native interactions in Swedish, French, and Dutch, I examine lexical, grammatical and interaction-related problems in turn. The analyses reveal that (a) different problems yield behaviours with different formal and interactive properties that are common across the language pairs and the participant roles; (b) native and non-native behaviour differs in degree, not in kind; and (c) that individual communicative style determines behaviour more than the gravity of the linguistic problem. I discuss the implications for theories opposing 'efficient' L2 communication to learning. Also, contra the traditional view of compensatory gestures, I will argue for a multi-functional 'hydraulic' view grounded in gesture theory where speech and gesture are equal partners, but where the weight carried by the modalities shifts depending on expressive pressures.
  • Gullberg, M. (2011). Language-specific encoding of placement events in gestures. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 166-188). New York: Cambridge University Press.

    Abstract

    This study focuses on the effect of the semantics of placement verbs on placement event representations. Specifically, it explores to what extent the semantic properties of habitually used verbs guide attention to certain types of spatial information. French, which typically uses a general placement verb (mettre, 'put'), is contrasted with Dutch, which uses a set of fine-grained (semi-)obligatory posture verbs (zetten, leggen, 'set/stand', 'lay'). Analysis of the concomitant gesture production in the two languages reveals a patterning toward two distinct, language-specific event representations. The object being placed is an essential part of the Dutch representation, while French speakers instead focus only on the (path of the) placement movement. These perspectives permeate the entire placement domain regardless of the actual verb used.
  • Gullberg, M. (2011). Thinking, speaking, and gesturing about motion in more than one language. In A. Pavlenko (Ed.), Thinking and speaking in two languages (pp. 143-169). Bristol: Multilingual Matters.

    Abstract

    A key problem in studies of bilingual linguistic cognition is how to probe the details of underlying representations in order to gauge whether bilinguals' conceptualizations differ from those of monolinguals, and if so how. This chapter provides an overview of a line of studies that rely on speech-associated gestures to explore these issues. The gestures of adult monolingual native speakers differ systematically across languages, reflecting consistent differences in what information is selected for expression and how it is mapped onto morphosyntactic devices. Given such differences, gestures can provide more detailed information on how multilingual speakers conceptualize events treated differently in their respective languages, and therefore, ultimately, on the nature of their representations. This chapter reviews a series of studies in the domain of (voluntary and caused) motion event construal. I first discuss speech and gesture evidence for different construals in monolingual native speakers, then review studies on second language speakers showing gestural evidence of persistent L1 construals, shifts to L2 construals, and of bidirectional influences. I consider the implications for theories of ultimate attainment in SLA, transfer and convergence. I will also discuss the methodological implications, namely what gesture data do and do not reveal about linguistic conceptualisation and linguistic relativity proper.
  • Gussenhoven, C., & Zhou, W. (2013). Revisiting pitch slope and height effects on perceived duration. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 1365-1369).

    Abstract

    The shape of pitch contours has been shown to have an effect on the perceived duration of vowels. For instance, vowels with high level pitch and vowels with falling contours sound longer than vowels with low level pitch. Depending on whether the comparison is between level pitches or between level and dynamic contours, these findings have been interpreted in two ways. For inter-level comparisons, where the duration results are the reverse of production results, a hypercorrection strategy in production has been proposed [1]. By contrast, for comparisons between level pitches and dynamic contours, the longer production data for dynamic contours have been held responsible. We report an experiment with Dutch and Chinese listeners which aimed to show that production data and perception data are each other’s opposites for high, low, falling and rising contours. We explain the results, which are consistent with earlier findings, in terms of the compensatory listening strategy of [2], arguing that the perception effects are due to a perceptual compensation of articulatory strategies and constraints, rather than that differences in production compensate for psycho-acoustic perception effects.
  • Hagoort, P. (2006). On Broca, brain and binding. In Y. Grodzinsky, & K. Amunts (Eds.), Broca's region (pp. 240-251). Oxford: Oxford University Press.
  • Hagoort, P. (2017). It is the facts, stupid. In J. Brockman, F. Van der Wa, & H. Corver (Eds.), Wetenschappelijke parels: het belangrijkste wetenschappelijke nieuws volgens 193 'briljante geesten'. Amsterdam: Maven Press.
  • Hagoort, P. (2011). The binding problem for language, and its consequences for the neurocognition of comprehension. In E. A. Gibson, & N. J. Pearlmutter (Eds.), The processing and acquisition of reference (pp. 403-436). Cambridge, MA: MIT Press.
  • Hagoort, P. (2011). The neuronal infrastructure for unification at multiple levels. In G. Gaskell, & P. Zwitserlood (Eds.), Lexical representation: A multidisciplinary approach (pp. 231-242). Berlin: De Gruyter Mouton.
  • Hagoort, P. (2006). Het zwarte gat tussen brein en bewustzijn. In J. Janssen, & J. Van Vugt (Eds.), Brein en bewustzijn: Gedachtensprongen tussen hersenen en mensbeeld (pp. 9-24). Nijmegen: Damon.
  • Hagoort, P., & Beckmann, C. F. (2019). Key issues and future directions: The neural architecture for language. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 527-532). Cambridge, MA: MIT Press.
  • Hagoort, P. (2015). Het talige brein. In A. Aleman, & H. E. Hulshoff Pol (Eds.), Beeldvorming van het brein: Imaging voor psychiaters en psychologen (pp. 169-176). Utrecht: De Tijdstroom.
  • Hagoort, P. (2019). Introduction. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 1-6). Cambridge, MA: MIT Press.
  • Hagoort, P. (2015). Spiegelneuronen. In J. Brockmann (Ed.), Wetenschappelijk onkruid: 179 hardnekkige ideeën die vooruitgang blokkeren (pp. 455-457). Amsterdam: Maven Publishing.
  • Hagoort, P. (2020). Taal. In O. Van den Heuvel, Y. Van der Werf, B. Schmand, & B. Sabbe (Eds.), Leerboek neurowetenschappen voor de klinische psychiatrie (pp. 234-239). Amsterdam: Boom Uitgevers.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P., & Poeppel, D. (2013). The infrastructure of the language-ready brain. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 233-255). Cambridge, MA: MIT Press.

    Abstract

    This chapter sketches in very general terms the cognitive architecture of both language comprehension and production, as well as the neurobiological infrastructure that makes the human brain ready for language. Focus is on spoken language, since that compares most directly to processing music. It is worth bearing in mind that humans can also interface with language as a cognitive system using sign and text (visual) as well as Braille (tactile); that is to say, the system can connect with input/output processes in any sensory modality. Language processing consists of a complex and nested set of subroutines to get from sound to meaning (in comprehension) or meaning to sound (in production), with remarkable speed and accuracy. The first section outlines a selection of the major constituent operations, from fractionating the input into manageable units to combining and unifying information in the construction of meaning. The next section addresses the neurobiological infrastructure hypothesized to form the basis for language processing. Principal insights are summarized by building on the notion of “brain networks” for speech–sound processing, syntactic processing, and the construction of meaning, bearing in mind that such a neat three-way subdivision overlooks important overlap and shared mechanisms in the neural architecture subserving language processing. Finally, in keeping with the spirit of the volume, some possible relations are highlighted between language and music that arise from the infrastructure developed here. Our characterization of language and its neurobiological foundations is necessarily selective and brief. Our aim is to identify for the reader critical questions that require an answer to have a plausible cognitive neuroscience of language processing.
  • Hagoort, P. (2017). The neural basis for primary and acquired language skills. In E. Segers, & P. Van den Broek (Eds.), Developmental Perspectives in Written Language and Literacy: In honor of Ludo Verhoeven (pp. 17-28). Amsterdam: Benjamins. doi:10.1075/z.206.02hag.

    Abstract

    Reading is a cultural invention that needs to recruit cortical infrastructure that was not designed for it (cultural recycling of cortical maps). In the case of reading both visual cortex and networks for speech processing are recruited. Here I discuss current views on the neurobiological underpinnings of spoken language that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca’s and Wernicke’s region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication. The consequences of this architecture for reading are discussed.
  • Hahn, L. E., Ten Buuren, M., De Nijs, M., Snijders, T. M., & Fikkert, P. (2019). Acquiring novel words in a second language through mutual play with child songs - The Noplica Energy Center. In L. Nijs, H. Van Regenmortel, & C. Arculus (Eds.), MERYC19 Counterpoints of the senses: Bodily experiences in musical learning (pp. 78-87). Ghent, Belgium: EuNet MERYC 2019.

    Abstract

    Child songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring child songs in a playhouse. We present data from three studies that serve as scientific proof for the functionality of one game of the playhouse: the Energy Center. For this game, three hand-bikes were mounted on a panel. When children start moving the hand-bikes, child songs start playing simultaneously. Once the children produce enough energy with the hand-bikes, the songs are additionally accompanied with the sounds of musical instruments. In our studies, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our studies were run in the field, one at a Dutch and one at an Indian pre-school. The third study features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from songs they heard. We will follow up on these promising observations in future studies.
  • Hall-Lew, L., Fairs, A., & Lew, A. D. (2015). Tourists' Attitudes towards Linguistic Variation in Scotland. In E. Torgersen, S. Hårstad, B. Mæhlum, & U. Røyneland (Eds.), Language Variation - European Perspectives V (pp. 99-110). Amsterdam: Benjamins.

    Abstract

    This paper joins studies of linguistic variation (e.g. Labov 1972; Dubois & Horvath 2000) and discourse (e.g. Jaworski & Lawson 2005; Jaworski & Pritchard 2005; Thurlow & Jaworski 2010) that consider the intersection between language and tourism. By examining the language attitudes that tourists hold toward linguistic variability in their host community, we find that attitudes differ by context and with respect to tourists’ travel motivations. We suggest that these results are particularly likely in a context like Edinburgh, Scotland, where linguistic variation has an iconic link to place authenticity. We propose that the joint commodification of ‘intelligibility’ and ‘authenticity’ explains this variability. The results raise questions about how the commodity value of travel motivation and the associated context of language use influence language attitudes.
  • Hammarström, H. (2011). Automatic annotation of bibliographical references for descriptive language materials. In P. Forner, J. Kekäläinen, M. Lalmas, & M. De Rijke (Eds.), Multilingual and multimodal information access evaluation. Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011, Amsterdam, The Netherlands, September 19-22, 2011; Proceedings (pp. 62-73). Berlin: Springer.

    Abstract

    The present paper considers the problem of annotating bibliographical references with labels/classes, given training data of references already annotated with labels. The problem is an instance of document categorization where the documents are short and written in a wide variety of languages. The skewed distributions of title words and labels calls for special carefulness when choosing a Machine Learning approach. The present paper describes how to induce Disjunctive Normal Form formulae (DNFs), which have several advantages over Decision Trees. The approach is evaluated on a large real-world collection of bibliographical references.
  • Hammarström, H., & O'Connor, L. (2013). Dependency sensitive typological distance. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 337-360). Berlin: Mouton de Gruyter.
  • Hammarström, H. (2019). An inventory of Bantu languages. In M. Van de Velde, K. Bostoen, D. Nurse, & G. Philippson (Eds.), The Bantu languages (2nd ed.). London: Routledge.

    Abstract

    This chapter aims to provide an updated list of all Bantu languages known at present and to provide individual pointers to further information on the inventory. The area division has some correlation with what are perceived genealogical relations between Bantu languages, but they are not defined as such and do not change whenever there is an update in our understanding of genealogical relations. Given the popularity of Guthrie codes in Bantu linguistics, our listing also features a complete mapping to Guthrie codes. The language inventory listed excludes sign languages used in the Bantu area, speech registers, pidgins, drummed/whistled languages and urban youth languages. Pointers to such languages in the Bantu area are included in the continent-wide overview in Hammarström. The most important alternative names, subvarieties and spelling variants are given for each language, though such lists are necessarily incomplete and reflect some degree of arbitrary selection.
  • Hammarström, H. (2015). Glottolog: A free, online, comprehensive bibliography of the world's languages. In E. Kuzmin (Ed.), Proceedings of the 3rd International Conference on Linguistic and Cultural Diversity in Cyberspace (pp. 183-188). Moscow: UNESCO.
  • Hammarström, H. (2013). Noun class parallels in Kordofanian and Niger-Congo: Evidence of genealogical inheritance? In T. C. Schadeberg, & R. M. Blench (Eds.), Nuba Mountain Language Studies (pp. 549-570). Köln: Köppe.
  • Hanique, I., & Ernestus, M. (2011). Final /t/ reduction in Dutch past-participles: The role of word predictability and morphological decomposability. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2849-2852).

    Abstract

    This corpus study demonstrates that the realization of word-final /t/ in Dutch past-participles in various speech styles is affected by a word’s predictability and paradigmatic relative frequency. In particular, /t/s are shorter and more often absent if the two preceding words are more predictable. In addition, /t/s, especially in irregular verbs, are more reduced, the lower the verb’s lemma frequency relative to the past-participle’s frequency. Both effects are more pronounced in more spontaneous speech. These findings are expected if speech planning plays an important role in speech reduction. Index Terms: pronunciation variation, acoustic reduction, corpus research, word predictability, morphological decomposability
  • Hanique, I., Aalders, E., & Ernestus, M. (2015). How robust are exemplar effects in word comprehension? In G. Jarema, & G. Libben (Eds.), Phonological and phonetic considerations of lexical processing (pp. 15-39). Amsterdam: Benjamins.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Harbusch, K., & Kempen, G. (2011). Automatic online writing support for L2 learners of German through output monitoring by a natural-language paraphrase generator. In M. Levy, F. Blin, C. Bradin Siskin, & O. Takeuchi (Eds.), WorldCALL: International perspectives on computer-assisted language learning (pp. 128-143). New York: Routledge.

    Abstract

    Students who are learning to write in a foreign language often want feedback on the grammatical quality of the sentences they produce. The usual NLP approach to this problem is based on parsing student-generated text. Here, we propose a generation-based approach aiming at preventing errors ("scaffolding"). In our ICALL system, the student constructs sentences by composing syntactic trees out of lexically anchored "treelets" via a graphical drag & drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree. It provides positive feedback if the student-composed tree belongs to the well-formed set, and negative feedback otherwise. If so requested by the student, it can substantiate the positive or negative feedback based on a comparison between the student-composed tree and its own trees (informative feedback on demand). In case of negative feedback, the system refuses to build the structure attempted by the student. Frequently occurring errors are handled in terms of "malrules." The system we describe is a prototype (implemented in JAVA and C++) which can be parameterized with respect to L1 and L2, the size of the lexicon, and the level of detail of the visually presented grammatical structures.
  • Harbusch, K., & Kempen, G. (2006). ELLEIPO: A module that computes coordinative ellipsis for language generators that don't. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006) (pp. 115-118).

    Abstract

    Many current sentence generators lack the ability to compute elliptical versions of coordinated clauses in accordance with the rules for Gapping, Forward and Backward Conjunction Reduction, and SGF (Subject Gap in clauses with Finite/Fronted verb). We describe a module (implemented in JAVA, with German and Dutch as target languages) that takes non-elliptical coordinated clauses as input and returns all reduced versions licensed by coordinative ellipsis. It is loosely based on a new psycholinguistic theory of coordinative ellipsis proposed by Kempen. In this theory, coordinative ellipsis is not supposed to result from the application of declarative grammar rules for clause formation but from a procedural component that interacts with the sentence generator and may block the overt expression of certain constituents.
  • Harbusch, K., Kempen, G., Van Breugel, C., & Koch, U. (2006). A generation-oriented workbench for performance grammar: Capturing linear order variability in German and Dutch. In Proceedings of the 4th International Natural Language Generation Conference (pp. 9-11).

    Abstract

    We describe a generation-oriented workbench for the Performance Grammar (PG) formalism, highlighting the treatment of certain word order and movement constraints in Dutch and German. PG enables a simple and uniform treatment of a heterogeneous collection of linear order phenomena in the domain of verb constructions (variably known as Cross-serial Dependencies, Verb Raising, Clause Union, Extraposition, Third Construction, Particle Hopping, etc.). The central data structures enabling this feature are clausal “topologies”: one-dimensional arrays associated with clauses, whose cells (“slots”) provide landing sites for the constituents of the clause. Movement operations are enabled by unification of lateral slots of topologies at adjacent levels of the clause hierarchy. The PGW generator assists the grammar developer in testing whether the implemented syntactic knowledge allows all and only the well-formed permutations of constituents.
  • Harmon, Z., & Kapatsinski, V. (2015). Studying the dynamics of lexical access using disfluencies. In R. Lickley, & R. Eklund (Eds.), Proceedings of the 7th International Workshop on Disfluency in Spontaneous Speech (DiSS 2015) (pp. 41-44).

    Abstract

    Faced with planning problems related to lexical access, speakers take advantage of a major function of disfluencies: buying time. It is reasonable, then, to expect that the structure of disfluencies sheds light on the mechanisms underlying lexical access. Using data from the Switchboard Corpus, we investigated the effect of semantic competition during lexical access on repetition disfluencies. We hypothesized that the more time the speaker needs to access the following unit, the longer the repetition. We examined the repetitions preceding verbs and nouns and tested predictors influencing the accessibility of these items. Results suggest that speed of lexical access negatively correlates with the length of repetition and that the main determinants of lexical access speed differ for verbs and nouns. Longer disfluencies before verbs appear to be due to significant paradigmatic competition from semantically similar verbs. For nouns, they occur when the noun is relatively unpredictable given the preceding context.
  • Harmon, Z., & Kapatsinski, V. (2020). The best-laid plan of mice and men: Competition between top-down and preceding-item cues in plan execution. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 1674-1680). Montreal, QB: Cognitive Science Society.

    Abstract

    There is evidence that the process of executing a planned utterance involves the use of both preceding-context and top-down cues. Utterance-initial words are cued only by the top-down plan. In contrast, non-initial words are cued both by top-down cues and preceding-context cues. Co-existence of both cue types raises the question of how they interact during learning. We argue that this interaction is competitive: items that tend to be preceded by predictive preceding-context cues are harder to activate from the plan without this predictive context. A novel computational model of this competition is developed. The model is tested on a corpus of repetition disfluencies and shown to account for the influences on patterns of restarts during production. In particular, this model predicts a novel Initiation Effect: following an interruption, speakers re-initiate production from words that tend to occur in utterance-initial position, even when they are not initial in the interrupted utterance.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
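    For readers wanting a concrete picture of what "relating LSTM representations to brain activity" can look like in practice, here is a toy sketch: a cross-validated ridge regression from hidden states to per-word brain responses, scored with a per-sensor correlation. The arrays, dimensions, and scoring choices are invented assumptions, not the authors' pipeline.

```python
# Toy sketch (random placeholder arrays, invented dimensions): predict per-word
# brain responses from LSTM hidden states with cross-validated ridge regression,
# then score the fit with a per-sensor Pearson correlation. Not the authors' method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_words, n_hidden, n_sensors = 500, 256, 100

lstm_states = rng.standard_normal((n_words, n_hidden))  # one hidden state per word
brain_data = rng.standard_normal((n_words, n_sensors))  # e.g. one EEG/MEG pattern per word

predicted = cross_val_predict(Ridge(alpha=1.0), lstm_states, brain_data, cv=5)
r_per_sensor = [
    np.corrcoef(predicted[:, i], brain_data[:, i])[0, 1] for i in range(n_sensors)
]
print("mean correlation across sensors:", float(np.mean(r_per_sensor)))
```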
  • Haun, D. B. M. (2011). How odd I am! In M. Brockman (Ed.), Future science: Essays from the cutting edge (pp. 228-235). New York: Random House.

    Abstract

    Cross-culturally, the human mind varies more than we generally assume.
  • Haun, D. B. M., & Over, H. (2013). Like me: A homophily-based account of human culture. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural Evolution: Society, technology, language, and religion (pp. 75-85). Cambridge, MA: MIT Press.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2011). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Reprint]. In S. Dehaene, & E. Brannon (Eds.), Space, time and number in the brain. Searching for the foundations of mathematical thought (pp. 191-206). London: Academic Press.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Hayano, K. (2011). Claiming epistemic primacy: Yo-marked assessments in Japanese. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 58-81). Cambridge: Cambridge University Press.
  • Hayano, K. (2013). Question design in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 395-414). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch19.

    Abstract

    This chapter contains sections titled: Introduction; Questions; Questioning and the Epistemic Gradient; Presuppositions, Agenda Setting and Preferences; Social Actions Implemented by Questions; Questions as Building Blocks of Institutional Activities; Future Directions.
  • De Heer Kloots, M., Carlson, D., Garcia, M., Kotz, S., Lowry, A., Poli-Nardi, L., de Reus, K., Rubio-García, A., Sroka, M., Varola, M., & Ravignani, A. (2020). Rhythmic perception, production and interactivity in harbour and grey seals. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 59-62). Nijmegen: The Evolution of Language Conferences.
  • Heilbron, M., Ehinger, B., Hagoort, P., & De Lange, F. P. (2019). Tracking naturalistic linguistic predictions with deep neural language models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 424-427). doi:10.32470/CCN.2019.1096-0.

    Abstract

    Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being ‘prediction encouraging’, potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word’s predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
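    The abstract centres on word-by-word predictability estimates from a neural language model. As a hedged illustration only, the sketch below computes per-word surprisal with an off-the-shelf GPT-2 model via the Hugging Face transformers library; the paper's actual model, stimuli, preprocessing, and EEG regression are not reproduced here.

```python
# Hedged illustration: per-word surprisal (in bits) from an off-the-shelf GPT-2
# model. GPT-2 is chosen purely for illustration and is an assumption, not
# necessarily the language model used in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "the old man the boats"              # any naturalistic text would do
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits              # shape: (1, seq_len, vocab_size)

# Surprisal of each token given all preceding tokens, converted to bits.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
surprisal = -log_probs[torch.arange(targets.size(0)), targets] / torch.log(torch.tensor(2.0))

for token, s in zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal.tolist()):
    print(f"{token:>12s}  {s:6.2f} bits")
```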
  • Herbst, L. E. (2006). The influence of language dominance on bilingual VOT: A case study. In Proceedings of the 4th University of Cambridge Postgraduate Conference on Language Research (CamLing 2006) (pp. 91-98). Cambridge: Cambridge University Press.

    Abstract

    Longitudinally collected VOT data from an early English-Italian bilingual who became increasingly English-dominant were analyzed. Stops in English were always produced with significantly longer VOT than in Italian. However, the speaker did not show any significant change in VOT production in either language over time, despite the clear dominance of English in his everyday language use later in life. The results indicate that – unlike L2 learners – early bilinguals may remain unaffected by language use with respect to phonetic realization.
  • Hill, C. (2011). Collaborative narration and cross-speaker repetition in Umpila and Kuuku Ya'u. In B. Baker, R. Gardner, M. Harvey, & I. Mushin (Eds.), Indigenous language and social identity: Papers in honour of Michael Walsh (pp. 237-260). Canberra: Pacific Linguistics.
  • Hintz, F., & Huettig, F. (2015). The complexity of the visual environment modulates language-mediated eye gaze. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 39-55). Berlin: Springer. doi:10.1007/978-81-322-2443-3_3.

    Abstract

    Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological (a beaver, bever), a shape (a bobbin, klos), a semantic (a fork, vork) competitor, and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1, 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects nevertheless had been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search.
  • Hoeksema, N., Villanueva, S., Mengede, J., Salazar-Casals, A., Rubio-García, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2020). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 162-164). Nijmegen: The Evolution of Language Conferences.
  • Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences.
  • Hofmeister, P., & Norcliffe, E. (2013). Does resumption facilitate sentence comprehension? In P. Hofmeister, & E. Norcliffe (Eds.), The core and the periphery: Data-driven perspectives on syntax inspired by Ivan A. Sag (pp. 225-246). Stanford, CA: CSLI Publications.
  • Holler, J., Tutton, M., & Wilkin, K. (2011). Co-speech gestures in the process of meaning coordination. In Proceedings of the 2nd GESPIN - Gesture & Speech in Interaction Conference, Bielefeld, 5-7 Sep 2011.

    Abstract

    This study uses a classical referential communication task to investigate the role of co-speech gestures in the process of coordination. The study manipulates both the common ground between the interlocutors and the visibility of the gestures they use. The findings show that co-speech gestures are an integral part of the referential utterances speakers produced with regard to both initial and repeated references, and that the availability of gestures appears to impact on interlocutors’ referential coordination. The results are discussed with regard to past research on common ground as well as theories of gesture production.
  • Holler, J., & Stevens, R. (2006). How speakers represent size information in referential communication for knowing and unknowing recipients. In D. Schlangen, & R. Fernandez (Eds.), Brandial '06 Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, Germany, September 11-13.
  • Holler, J., & Bavelas, J. (2017). Multi-modal communication of common ground: A review of social functions. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 213-240). Amsterdam: Benjamins.

    Abstract

    Until recently, the literature on common ground depicted its influence as a purely verbal phenomenon. We review current research on how common ground influences gesture. With informative exceptions, most experiments found that speakers used fewer gestures as well as fewer words in common ground contexts; i.e., the gesture/word ratio did not change. Common ground often led to more poorly articulated gestures, which parallels its effect on words. These findings support the principle of recipient design as well as more specific social functions such as grounding, the given-new contract, and Grice’s maxims. However, conceptual pacts or linking old with new information may maintain the original form. Altogether, these findings implicate gesture-speech ensembles rather than isolated effects on gestures alone.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
  • Huettig, F. (2011). The role of color during language-vision interactions. In R. K. Mishra, & N. Srinivasan (Eds.), Language-Cognition interface: State of the art (pp. 93-113). München: Lincom.
  • Huettig, F. (2013). Young children’s use of color information during language-vision mapping. In B. R. Kar (Ed.), Cognition and brain development: Converging evidence from various methodologies (pp. 368-391). Washington, DC: American Psychological Association Press.
