Workshop "Corpus Platforms & Multimodality: Integrating audiovisual and sensor-based data for research on social interaction""
Organized by T. Schmidt (IDS Mannheim & U Basel) & K. Pitsch (U Duisburg-Essen)
After the "multimodal turn" in research on social interaction (e.g., Goodwin 2000, Mondada 2014, Deppermann/Streeck 2018) current research on social interaction has begun - also in the humanities - to encompass not only audio and video data, but to include further sensor-based data, such as e.g. data from mobile Eye-Tracking glasses, Motion Capture devices or logfiles from technical instruments (e.g. Kendrick & Holler 2017, Auer 2021, Stukenbrock 2018, Pitsch et al. 2013, Pitsch 2020). Thus, corpus tools and platforms are required which support and integrate these different types of data. Ideally, the different types of data and information should be synchronized on the timeline, displayed online, cross-referenced with transcripts, annotations and metadata, so that they can be used for different forms of qualitative and quantitative analysis. While current corpus tools used in field of social interaction provide comfortable means for dealing with audio-visual data and have developed a coherent infrastructure for audio data (e.g. Schmidt 2016, 2018), extensions are required to include and deal with these novel data types both to support analysis and long-term storage and data reuse.
This small-scale workshop aims to bring together researchers and developers of corpus platforms for social interaction to present the current state of the art and to discuss how such novel data types can best be integrated into existing tools. The following questions will be addressed:
(i) Which requirements do novel research projects on social interaction present?
(ii) How do current corpus platforms address these issues? Which extensions are required, and how could they be realized?
(iii) What could interoperable data structures look like?
Contributors are invited to present solutions and work in progress from their own research contexts and to sketch ideas for future developments and improved interoperability.
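As one possible starting point for question (iii), the following speculative sketch outlines a tier-based record in which transcript annotations and sensor streams share a single timeline. All class and field names are illustrative assumptions and do not correspond to any existing standard or tool:

```python
# Speculative sketch of an interoperable corpus record: annotation tiers
# and sensor streams aligned to one shared timeline (illustrative only).
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Annotation:
    start: float          # seconds on the shared timeline
    end: float
    value: str            # e.g. a transcribed utterance

@dataclass
class SensorStream:
    kind: str             # e.g. "eye-tracking", "motion-capture"
    sample_rate_hz: float
    samples: list         # raw samples, aligned to the same timeline

@dataclass
class Recording:
    media_files: list[str]
    tiers: dict[str, list[Annotation]] = field(default_factory=dict)
    streams: list[SensorStream] = field(default_factory=list)

rec = Recording(media_files=["session01.mp4"])
rec.tiers["transcript"] = [Annotation(0.0, 1.2, "hello")]
rec.streams.append(SensorStream("eye-tracking", 50.0, [[0.00, 0.4, 0.6]]))

print(json.dumps(asdict(rec), indent=2))  # serializes for exchange and reuse
```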
References
Auer, P. (2021). Turn-allocation and gaze: A multimodal revision of the "current-speaker-selects-next" rule of the turn-taking system of conversation analysis. Discourse Studies, 23(2), 117-140.
Deppermann, A. (2018). Sprache in der multimodalen Interaktion. In A. Deppermann & S. Reineke (Eds.), Sprache im kommunikativen, interaktiven und kulturellen Kontext (Germanistische Sprachwissenschaft um 2020, Vol. 3, pp. 51-85). Berlin/Boston: de Gruyter.
Deppermann, A., & Streeck, J. (Eds.) (2018). Time in Embodied Interaction: Synchronicity and sequentiality of multimodal resources. Amsterdam: John Benjamins.
Goodwin, C. (2000). Action and embodiment within situated human interaction. Journal of Pragmatics, 32(10), 1489-1522.
Kendrick, K. H., & Holler, J. (2017). Gaze direction signals response preference in conversation. Research on Language and Social Interaction, 50(1), 12-32.
Mondada, L. (2014). The local constitution of multimodal resources for social interaction. Journal of Pragmatics, 65, 137-156.
Pitsch, K. (2020). Answering a robot's questions: Participation dynamics of adult-child groups in encounters with a museum guide robot. Réseaux, 220-221(2-3), 113-150.
Pitsch, K., Neumann, A., Schnier, C., & Hermann, T. (2013). Augmented reality as a tool for linguistic research: Intercepting and manipulating multimodal interaction. Paper presented at the Multimodal Corpora: Beyond Audio and Video (IVA 2013 Workshop), Edinburgh, UK, 7 pages.
Schmidt, T. (2016). Construction and dissemination of a corpus of spoken interaction: Tools and workflow in the FOLK project. Journal for Language Technology and Computational Linguistics, 31(1) (Special issue: Corpus Linguistic Software Tools), 127-154.
Schmidt, T. (2018). Gesprächskorpora: Aktuelle Herausforderungen für einen besonderen Korpustyp. In M. Kupietz & T. Schmidt (Eds.), Korpuslinguistik (pp. 209-230). Berlin/Boston: de Gruyter.
Stukenbrock, A. (2018). Mobile dual eye-tracking in face-to-face interaction: The case of deixis and joint attention. In G. Brône & B. Oben (Eds.), Eye-tracking in interaction: Studies on the role of eye gaze in dialogue (pp. 265-302). Amsterdam: John Benjamins.
Preliminary Schedule: Monday, 04.10.2021 (Zoom)
20 min. presentation (max.) + 10 min. discussion
09:30 – 09:40 | Karola Pitsch & Thomas Schmidt | Opening
09:40 – 10:10 | Karola Pitsch (Duisburg-Essen) | Requirements for Corpus Platforms from the Perspective of Research on Social Interaction
10:10 – 10:40 | Thomas Schmidt & Elena Frick (IDS Mannheim, Basel) | Beyond transcription: Thoughts on integrating new data types into workflows and tools for oral corpora
10:40 – 10:50 | Coffee Break
10:50 – 11:20 | Han Sloetjes (MPI Nijmegen) | ELAN – current support for time series data
11:20 – 12:00 | Sonja Bayer & Doris Bambey (DIPF Frankfurt) | Process data in educational research
12:00 – 12:15 | General Discussion I
12:15 – 13:15 | Lunch Break
13:15 – 13:45 | Carole Etienne (CNRS Lyon) | From multimedia data useful to the researcher to their online exploration and availability in a database
13:45 – 14:15 | René Tuma, Willi Pröbrock & Hubert Knoblauch (TU Berlin) | Challenges from the Perspective of Archiving Infrastructures for Audiovisual Data (aviDA Project)
14:15 – 14:45 | Andre Krause & Karola Pitsch | Processing and transformation of eye-tracking data for interactional research
14:45 – 15:30 | General Discussion II