Han Sloetjes

Presentations

  • Drude, S., Stehouwer, H., Trilsbeek, P., Broeder, D., & Sloetjes, H. (2013). Language documentation and the language archive as e-humanities centrum. Poster presented at the Soeterbeeck eHumanities Workshop, Ravenstein, The Netherlands.
  • Sloetjes, H. (2013). ELAN. Poster presented at the Soeterbeeck eHumanities Workshop, Ravenstein, The Netherlands.
  • Sloetjes, H., Somasundaram, A., Drude, S., Stehouwer, H., & Van de Looij, K. J. (2013). Expanding and connecting the annotation tool ELAN. Talk presented at Digital Humanities Conference 2013. Lincoln, Nebraska. 2013-07-16 - 2013-07-19.

    Abstract

    The annotation tool ELAN allows for adding time-linked textual annotations to digital audio and video recordings. It is applied in various disciplines within the humanities, with linguistics, sign language and gesture research represented most prominently in its user base. This paper highlights new developments in ELAN with an emphasis on those features that introduced new technological and methodological approaches to analysing both audio/video and derived textual data.
  • Sloetjes, H., Somasundaram, A., Stehouwer, H., & Drude, S. (2013). Novel developments in ELAN. Talk presented at the 3rd International Conference on Language Documentation and Conservation (ICLDC), “Sharing Worlds of Knowledge". Honolulu, Hawaii. 2013-02-28 - 2013-03-03.
  • Sloetjes, H., Stehouwer, H., & Drude, S. (2013). Novel developments in ELAN. Talk presented at the 23rd Meeting of Computational Linguistics in the Netherlands (CLIN 2013). Enschede, The Netherlands. 2013-01-18.
  • Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). Automatic annotation of media field recordings. Talk presented at ECAI 2010 Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2010). Lisbon, Portugal. 2010-08-16 - 2010-08-20. Retrieved from http://ilk.uvt.nl/LaTeCH2010/.

    Abstract

    In this paper we describe a new attempt to build automatic detectors that process real-scene audio-video streams and that researchers worldwide can use to speed up their annotation and analysis work. These recordings are typically made in field and experimental situations, often of poor quality and with only small corpora available, which prevents the use of standard stochastic pattern recognition techniques. Audio/video processing components are taken out of the expert lab and integrated into easy-to-use interactive frameworks, so that researchers can run them with modified parameters and check the usefulness of the created annotations. In the end, a variety of detectors may have been applied, yielding a lattice of annotations. A flexible search engine allows finding combinations of patterns, opening up completely new analysis and theorisation possibilities for researchers who until now were required to do all annotations manually and had no help in pre-segmenting lengthy media recordings.
  • Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). ELAN as flexible annotation framework for sound and image processing detectors. Poster presented at the Seventh International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta.

    Abstract

    Annotation of digital recordings in humanities research is still, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen, and Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin, and Fraunhofer Heinrich-Hertz-Institut, Berlin), and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia-based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
  • Kemps-Snijders, M., Koller, T., Sloetjes, H., & Verweij, H. (2010). LAT bridge: Bridging tools for annotation and exploration of rich linguistic data. Talk presented at the Seventh International Conference on Language Resources and Evaluation (LREC 2010). Valletta, Malta. 2010-05-19 - 2010-05-21.

    Abstract

    We present a software module, the LAT Bridge, which enables bidirectional communication between the annotation and exploration tools developed at the Max Planck Institute for Psycholinguistics as part of our Language Archiving Technology (LAT) tool suite. These existing annotation and exploration tools enable the annotation, enrichment, exploration and archive management of linguistic resources. The user community has expressed the desire to use different combinations of LAT tools in conjunction with each other. The LAT Bridge is designed to cater for a number of basic data interaction scenarios between the LAT annotation and exploration tools. These interaction scenarios (e.g. bootstrapping a wordlist, searching for annotation examples or lexical entries) have been identified in collaboration with researchers at our institute. We had to take into account that the LAT tools for annotation and exploration represent a heterogeneous application scenario with desktop-installed and web-based tools. Additionally, the LAT Bridge has to work in situations where the Internet is not available or is available only in an unreliable manner (i.e. with a slow connection or with frequent interruptions). As a result, the LAT Bridge’s architecture supports both online and offline communication between the LAT annotation and exploration tools.