FLESH | Effort project
This website documents the data processing pipeline developed for the study Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game. The study is part of the ViCom project On the FLExibility and Stability of Gesture-speecH Coordination (FLESH). The pipeline covers every step from raw data processing to feature extraction and analysis, focusing on the extraction of motion and acoustic signals, their cleaning, and their preparation for the relevant analyses.

The repository associated with this project can be found on GitHub.
Current study
We recruited 60 Dutch dyads (i.e., 120 individuals) to participate in a gestural-vocal referential game. One member of each dyad acts as a performer, and the other as a guesser. The performer’s role is to express a meaning in one of three conditions: using only voice, only gesture, or both. The guesser’s role is to guess the meaning. If the answer is not correct, the performer has two more attempts to repair the misunderstanding. In total, each dyad performs 42 different concepts in the three modality conditions, and partners swap roles within each condition. We recorded performers’ movement, vocalizations, and postural sway.
Our main research question addresses whether (and how) people become more effortful when they attempt to resolve misunderstandings in a novel gestural-vocal communication task. Effort-related features of interest include upper limb torque, acoustic amplitude envelope, and center of pressure.
You can read more about the theoretical reasoning in the preregistration’s introduction.
Two-phase preregistration
This study has been preregistered in two phases.
In phase I, we preregistered the experimental design, laboratory setup and power analysis. The preregistration is available at the OSF Registries.
In phase II, we preregistered the research questions and hypotheses, together with the code pipeline covering pre-processing, processing, and the analyses themselves. The preregistration is available at the OSF Registries.
Updates
[✅] Preregistration of data collection
[✅] Data collection completed
[✅] Preregistration of analysis and processing steps
[ ] Preprint published
[ ] Manuscript published
[ ] Data available at open access repository
Pipeline Overview
This study builds on a multistep pipeline that serves to:
- extract the raw data
- process them
- extract relevant features
- analyze them with regard to the research questions
See the Methods section for a conceptual overview of the processing and analysis steps.
Note that in this workflow, each step builds on the previous one. However, it is possible to use parts of the workflow for different purposes.
Preprocessing of the raw data
- In Pre-Processing I: from XDF to raw files we load and clean the raw XDF data, align the streams, and prepare them for downstream processing (see the sketch below).
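The full step is documented on the linked page; as a minimal sketch, assuming the recordings are read with the pyxdf library (the file name below is hypothetical):

```python
# Minimal sketch: load an XDF recording and inspect its streams.
# pyxdf synchronizes the stream clocks on load; the file name is hypothetical.
import pyxdf

streams, header = pyxdf.load_xdf("dyad01_session1.xdf")

for stream in streams:
    name = stream["info"]["name"][0]
    srate = float(stream["info"]["nominal_srate"][0])
    print(f"{name}: {len(stream['time_stamps'])} samples at {srate} Hz")
```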
Motion Tracking Processing
- In Motion tracking I: Preparation of videos we crop video recordings and prepare them for motion capture.
- In Motion tracking II: 2D pose estimation via OpenPose we use OpenPose for 2D pose estimation (see the first sketch after this list).
- In Motion tracking III: Triangulation via Pose2sim we convert 2D coordinates to 3D using pose2sim.
- In Motion tracking IV: Modeling inverse kinematics and dynamics we compute inverse kinematics and dynamics using OpenSim (see the second sketch after this list).
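For the 2D stage, a minimal sketch of cropping and pose estimation, assuming ffmpeg and a local OpenPose build are available; all paths and crop values are hypothetical:

```python
# Minimal sketch: crop a recording with ffmpeg, then run OpenPose on it.
# All paths and crop parameters are hypothetical.
import subprocess

# Crop to a 720x1080 region starting at pixel (300, 0).
subprocess.run([
    "ffmpeg", "-y", "-i", "dyad01_cam1.mp4",
    "-vf", "crop=720:1080:300:0",
    "dyad01_cam1_cropped.mp4",
], check=True)

# Run the OpenPose demo, writing per-frame keypoints as JSON files.
subprocess.run([
    "./build/examples/openpose/openpose.bin",
    "--video", "dyad01_cam1_cropped.mp4",
    "--write_json", "output/dyad01_cam1/",
    "--display", "0", "--render_pose", "0",
], check=True)
```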
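For the 3D stage, a sketch using the Pose2Sim Python API and the OpenSim tool classes; Pose2Sim expects to be run inside a project folder with a Config.toml, and the XML setup file names are hypothetical:

```python
# Minimal sketch: triangulate 2D keypoints to 3D with Pose2Sim,
# then compute joint angles and moments with OpenSim.
from Pose2Sim import Pose2Sim
import opensim as osim

# 2D -> 3D: match persons across cameras, triangulate, and filter.
Pose2Sim.personAssociation()
Pose2Sim.triangulation()
Pose2Sim.filtering()

# 3D -> inverse kinematics and dynamics, configured via XML setup files.
osim.InverseKinematicsTool("ik_setup.xml").run()
osim.InverseDynamicsTool("id_setup.xml").run()
```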
Signal Processing
- In Processing I: Motion tracking and balance we clean and interpolate motion signals, and extract derivatives such as speed, acceleration, and jerk (see the first sketch after this list).
- In Processing II: Acoustics we extract relevant acoustic features.
- In Processing III: Merging multimodal data we merge the motion and acoustic time series (see the second sketch after this list).
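A minimal sketch of the motion-signal step; the 100 Hz sampling rate, gap limit, and 10 Hz cutoff are illustrative, not the preregistered settings:

```python
# Minimal sketch: interpolate gaps, low-pass filter, and differentiate
# a 1-D position trace into velocity, acceleration, and jerk.
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

fs = 100.0  # sampling rate in Hz (hypothetical)

def clean_and_differentiate(pos: pd.Series) -> pd.DataFrame:
    # Fill short tracking gaps by linear interpolation.
    pos = pos.interpolate(limit=10, limit_direction="both")
    # Zero-phase low-pass Butterworth filter, 10 Hz cutoff.
    b, a = butter(2, 10.0 / (fs / 2), btype="low")
    smoothed = filtfilt(b, a, pos.to_numpy())
    # Successive time derivatives.
    vel = np.gradient(smoothed, 1 / fs)
    acc = np.gradient(vel, 1 / fs)
    jerk = np.gradient(acc, 1 / fs)
    return pd.DataFrame({"pos": smoothed, "vel": vel, "acc": acc, "jerk": jerk})
```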
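And a sketch of the acoustic and merging steps: a smoothed Hilbert amplitude envelope (one common way to compute the amplitude envelope mentioned above), aligned to the motion series by nearest timestamp. File and column names are hypothetical, and the audio is assumed to be mono:

```python
# Minimal sketch: amplitude envelope of the audio, merged with motion data.
import numpy as np
import pandas as pd
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

sr, audio = wavfile.read("dyad01_audio.wav")  # assumed mono
envelope = np.abs(hilbert(audio.astype(float)))
# Smooth the envelope with a 12 Hz low-pass filter.
b, a = butter(2, 12.0 / (sr / 2), btype="low")
envelope = filtfilt(b, a, envelope)

acoustic = pd.DataFrame({"time_s": np.arange(len(envelope)) / sr,
                         "envelope": envelope})

# Align acoustics to the (lower-rate) motion series by nearest timestamp.
motion = pd.read_csv("dyad01_motion.csv")  # must contain a time_s column
merged = pd.merge_asof(motion.sort_values("time_s"),
                       acoustic.sort_values("time_s"),
                       on="time_s", direction="nearest")
```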
Movement Annotation
- In Movement annotation I: Preparing training data and data for classifier we prepare our multimodal time series for training purposes.
- In Movement annotation II: Training movement classifier, and annotating timeseries data we train and evaluate classifiers for movement detection (see the first sketch after this list).
- In Movement annotation III: Computing interrater agreement between manual and automatic annotation we evaluate inter-annotator agreement (see the second sketch after this list).
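A minimal sketch of the training-data and classifier steps together; the window size, feature columns, and the random forest used here are assumptions, not necessarily the preregistered classifier:

```python
# Minimal sketch: window the merged series, summarize each window,
# and fit a movement classifier on the manually annotated windows.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def windowed_features(df: pd.DataFrame, size: int = 50) -> pd.DataFrame:
    rows = []
    for start in range(0, len(df) - size + 1, size):
        win = df.iloc[start:start + size]
        rows.append({
            "vel_mean": win["vel"].mean(),
            "vel_max": win["vel"].max(),
            "envelope_mean": win["envelope"].mean(),
            "label": win["label"].mode().iloc[0],  # majority manual label
        })
    return pd.DataFrame(rows)

data = windowed_features(pd.read_csv("dyad01_merged_annotated.csv"))
X, y = data.drop(columns="label"), data["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```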
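For the agreement step, a short sketch using Cohen's kappa, a common chance-corrected agreement measure (the pipeline's exact measure may differ):

```python
# Minimal sketch: chance-corrected agreement between manual and
# automatic annotations; the label sequences are made up.
from sklearn.metrics import cohen_kappa_score

manual    = ["move", "rest", "move", "move", "rest", "rest"]
automatic = ["move", "rest", "move", "rest", "rest", "rest"]
print(f"Cohen's kappa: {cohen_kappa_score(manual, automatic):.2f}")
```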
Final Merge
- In Final merge we merge the annotations with the time series (sketched below).
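A minimal sketch of such a merge, assuming the annotations come as intervals with begin/end times and a label (all column and file names are hypothetical):

```python
# Minimal sketch: attach interval annotations to a continuous time series.
import pandas as pd

timeseries = pd.read_csv("dyad01_merged.csv")        # has a time_s column
annotations = pd.read_csv("dyad01_annotations.csv")  # begin_s, end_s, label

timeseries["annotation"] = None
for row in annotations.itertuples():
    in_interval = timeseries["time_s"].between(row.begin_s, row.end_s)
    timeseries.loc[in_interval, "annotation"] = row.label
```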
Concept Similarity
- In Computing concept similarity using ConceptNet word embeddings we assess semantic similarity between concepts using ConceptNet (sketched below).
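A minimal sketch, assuming the pretrained ConceptNet Numberbatch vectors in word2vec text format (the exact file name and version are assumptions):

```python
# Minimal sketch: cosine similarity between two concepts using
# ConceptNet Numberbatch embeddings loaded via gensim.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("numberbatch-en-19.08.txt",
                                            binary=False)
print(vectors.similarity("hammer", "axe"))  # cosine similarity in [-1, 1]
```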
Feature Extraction
- In Extraction of effort-related features we extract features from the multimodal time series for modelling purposes (sketched below).
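A minimal sketch of per-attempt aggregation over the effort signals named earlier (upper limb torque, amplitude envelope, center of pressure); all column names are hypothetical:

```python
# Minimal sketch: aggregate effort-related features per performer attempt.
import pandas as pd

merged = pd.read_csv("all_dyads_merged.csv")

features = (
    merged
    .groupby(["dyad", "concept", "attempt"])
    .agg(
        torque_peak=("torque", "max"),
        envelope_mean=("envelope", "mean"),
        cop_path_length=("cop_displacement", "sum"),  # center of pressure
    )
    .reset_index()
)
```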
Exploratory Analysis: Most Predictive Features of Effort
- In Exploratory Analysis I: Using PCA to identify effort dimensions we explore the dimensionality of the extracted features using Principal Component Analysis (see the first sketch after this list).
- In Exploratory Analysis II: Identifying effort-related features contributing to misunderstanding resolution we assess feature importance using eXtreme Gradient Boosting (see the second sketch after this list).
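A minimal PCA sketch on a standardized feature table (file and column names are hypothetical):

```python
# Minimal sketch: how much variance do the first components explain?
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

features = pd.read_csv("effort_features.csv")  # hypothetical feature table
X = StandardScaler().fit_transform(
    features[["torque_peak", "envelope_mean", "cop_path_length"]])
print(PCA(n_components=3).fit(X).explained_variance_ratio_)
```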
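And a sketch of the feature-importance step with eXtreme Gradient Boosting; the outcome column and hyperparameters are hypothetical:

```python
# Minimal sketch: rank features by how well they predict whether
# an attempt was resolved, using gradient boosting.
import pandas as pd
from xgboost import XGBClassifier

features = pd.read_csv("effort_features.csv")
X = features[["torque_peak", "envelope_mean", "cop_path_length"]]
y = features["resolved"]  # 1 if the guesser got the concept, else 0

model = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
for name, score in zip(X.columns, model.feature_importances_):
    print(f"{name}: {score:.3f}")
```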
Confirmatory Analysis: Statistical Modelling
- In Statistical Analysis: Modelling the Effect of Communicative Attempt (H1) and Answer Similarity (H2) on Effort we build causal and statistical models testing our hypotheses (sketched below).
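As a sketch of the kind of model involved, a linear mixed model with random intercepts per dyad, fitted with statsmodels; the actual preregistered models (and the software used to fit them) may differ, and all column names are hypothetical:

```python
# Minimal sketch: effort as a function of communicative attempt and
# modality condition, with dyads as a grouping factor.
import pandas as pd
import statsmodels.formula.api as smf

features = pd.read_csv("effort_features.csv")
model = smf.mixedlm("torque_peak ~ attempt * condition",
                    data=features, groups=features["dyad"])
print(model.fit().summary())
```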
Acknowledgements
We would like to thank all participants of this study. Special thanks also go to the Donders lab coordinator Jiska Koemans and the Donders research integrity officer Miriam Kos. We are especially grateful to the members of the Donders Technical Support Group, namely Erik van den Berge, Norbert Hermesdorf, Gerard van Oijen, Maarten Snellen, and Pascal de Water, for their invaluable help with the technical setup. Finally, we thank the student assistants and interns in project FLESH (Jet Lambers, Justin Snelders, Gillian Rosenberg, Hamza Nalbantoğlu), who supported this project through their efforts in participant recruitment, data collection, data processing, and annotation.
Contact
Corresponding author: kadava[at]leibniz-zas[dot]de
How to cite
If you want to use and cite any part of the coding pipeline, cite:
Kadavá, Š., Ćwiek, A., & Pouw, W. (2025). Coding pipeline to the project Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game (Version 1.0.0) [Computer software]. https://github.com/sarkadava/FLESH_Effort
If you want to cite the project, cite:
Kadavá, Š., Pouw, W., Fuchs, S., Holler, J., & Ćwiek, A. (2025). Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game. OSF Registries. https://osf.io/8ajsg