
Poster E18 - Sketchpad Series

Deep Learning–Derived Facial EMG Signatures of Cognitive Workload in Immersive Virtual Reality

Poster Session E - Monday, March 9, 2026, 2:30 – 4:30 pm PDT, Fairview/Kitsilano Ballrooms

Nusrat Choudhury1* (nusrat.choudhury@nrc-cnrc.gc.ca), Zohreh H. Meybodi1*, Francis Thibault1, Budhachandra Khundrakpam1, Joshua A. Granek2, Gino De Luca1; 1National Research Council Canada, Medical Devices, Boucherville, Canada, 2Defence Research & Development Canada, Toronto Research Centre, Toronto, Canada; *equal authors

Introduction: Virtual Reality (VR) offers immersive, controlled environments ideal for probing cognitive–emotional states, while facial electromyography (fEMG) enables unobstructed affect sensing in head-mounted displays. We propose a Convolutional Neural Network–Temporal Convolutional Network (CNN–TCN) framework to decode facial expressions and workload from fEMG in VR.

Methods: Twelve participants (6F; 40.6±8.9 years) completed emotional, cognitive, physical, and dual-demand VR scenes using the National Research Council Canada's bWell platform. fEMG was recorded with the emteqPRO™ mask. A calibration session, in which participants made intentional expressions (smile, frown, surprise, neutral), was used to train a CNN–TCN model that was then applied to the experimental scenes. Preprocessing included tagging, normalization, and sliding-window segmentation. Mixed-effects models related predicted expression features to perceived workload (NASA-TLX).

Results: Self-reports confirmed distinct workload/mood profiles across scenes (χ²(4)=10.93, p=0.027). Across leave-one-participant-out folds, the model generalized robustly (test macro-F1≈0.88±0.13; ROC-AUC≈0.95±0.06). Training curves showed rapid loss reduction and F1 stabilization, indicating effective optimization without overfitting. PCA of 81 features revealed two latent dimensions: global expression dynamics (PC1, 24.5% variance) and frown-related tension (PC2, 17.1%). Mixed-effects models showed reduced expressiveness under cognitive load (β=−3.93, p=0.005) and elevated frown tension in emotional scenes (β=2.34, p=0.031). Feature-level models demonstrated scene-specific associations with workload: frown metrics tracked mental demand, surprise/eyebrow dynamics tracked frustration and temporal demand, non-neutral dominance tracked physical load, and smile-related dynamics increased under dual demand. These patterns were consistent with FACS interpretations.

Conclusion: CNN–TCN-based fEMG decoding yields robust, interpretable markers of affect and workload in VR, enabling scene-independent assessment and reducing reliance on self-report.
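The abstract does not give implementation details, so the following Python sketch is illustrative only: it shows per-channel normalization, sliding-window segmentation, and a minimal CNN front-end followed by a dilated temporal-convolution stack of the general kind described above. The channel count, sampling rate, window length, overlap, layer sizes, and class count are assumptions, not the authors' settings.

```python
# Illustrative sketch only; all numeric choices below (8 fEMG channels, 2000 Hz,
# 1-s windows with 50% overlap, 4 expression classes) are assumptions.
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(emg, win, hop):
    """Segment a (channels, samples) fEMG recording into overlapping windows."""
    starts = range(0, emg.shape[1] - win + 1, hop)
    return np.stack([emg[:, s:s + win] for s in starts])   # (n_win, ch, win)

class CNNTCN(nn.Module):
    """Minimal CNN front-end followed by a dilated temporal-convolution stack."""
    def __init__(self, n_ch=8, n_classes=4, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(                           # local feature extraction
            nn.Conv1d(n_ch, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        blocks = []
        for d in (1, 2, 4, 8):                              # exponentially dilated blocks
            blocks += [nn.Conv1d(hidden, hidden, kernel_size=3,
                                 padding=d, dilation=d), nn.ReLU()]
        self.tcn = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, n_classes)            # expression logits per window

    def forward(self, x):                                   # x: (batch, ch, samples)
        z = self.tcn(self.cnn(x))
        return self.head(z.mean(dim=-1))                    # global average pool over time

# Example: per-channel z-score normalization, segmentation, and a forward pass.
fs = 2000                                                   # assumed sampling rate (Hz)
emg = np.random.randn(8, 30 * fs).astype(np.float32)       # placeholder 30-s recording
emg = (emg - emg.mean(axis=1, keepdims=True)) / emg.std(axis=1, keepdims=True)
windows = sliding_windows(emg, win=fs, hop=fs // 2)         # 1-s windows, 50% overlap
logits = CNNTCN()(torch.from_numpy(windows))
print(logits.shape)                                         # (n_windows, 4)
```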
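The reported leave-one-participant-out evaluation could be organized as sketched below. The 81-feature dimensionality, 12 participants, and the macro-F1/ROC-AUC metrics follow the abstract; the synthetic data and the logistic-regression stand-in for the CNN–TCN are placeholders, not the authors' pipeline.

```python
# Illustrative sketch only: synthetic features/labels and a stand-in classifier
# are used to show the leave-one-participant-out fold structure and metrics.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 81))             # 81 expression features per window
y = rng.integers(0, 4, size=1200)           # 4 expression classes
groups = np.repeat(np.arange(12), 100)      # 12 participants, 100 windows each

f1s, aucs = [], []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    proba = clf.predict_proba(X[test])
    f1s.append(f1_score(y[test], clf.predict(X[test]), average="macro"))
    aucs.append(roc_auc_score(y[test], proba, multi_class="ovr", average="macro"))

print(f"macro-F1 {np.mean(f1s):.2f}±{np.std(f1s):.2f}, "
      f"ROC-AUC {np.mean(aucs):.2f}±{np.std(aucs):.2f}")
```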

Topic Area: EMOTION & SOCIAL: Emotion-cognition interactions

