Context
The EgoMotion workshop focuses on in-the-wild egocentric full-body motion understanding from data captured by wearable devices. We address three major problems in full-body motion: tracking, synthesis, and action recognition. The scope spans several topics and sub-fields that have grown in importance in recent years, and the workshop aims to bring them together in a common forum. Toward egocentric full-body tracking, motion synthesis, and action recognition, we will focus on, but are not limited to, algorithms developed for the following inputs:
- Videos - from head-mounted cameras or external cameras.
- Non-visual body-worn sensors - inertial measurement units (IMUs), electromagnetic (EM) sensors, barometers, magnetometers, audio, etc.
- Derived data - device trajectories, eye gaze, 3D environment reconstructions, semantic scene representations, text descriptions, narrations, etc.
In addition to algorithms, the workshop will promote multiple recent datasets and associated challenges to accelerate research in this field.
Invited Speakers
MPI-INF
Stanford University
ETH Zürich
University of Tübingen
Seoul National University
ETH Zürich
NTU
Tsinghua University
Stanford University
Schedule
Time | Event | Speaker |
---|---|---|
13:30 - 13:45 | Opening | Richard Newcombe |
13:45 - 14:15 | Invited Talk | Siyu Tang: Egocentric 3D Human Estimation and Synthesis |
14:15 - 14:45 | Invited Talk | Christian Theobalt: Egocentric Human Motion Capture with Head-mounted Cameras |
14:45 - 15:15 | Invited Talk | Hanbyul Joo: Towards Capturing Everyday Movements to Scale Up and Enrich Human Motion Data |
15:15 - 15:30 | Break | Live demos with Quest and Project Aria |
15:30 - 16:00 | Invited Talk | Lingni Ma, Yuting Ye: Nymeria: Understanding Human Motion from Egocentric Data |
16:00 - 16:30 | Invited Talk | C. Karen Liu, Jiaman Li: Egocentric Perception for Human Motion Synthesis |
16:30 - 17:00 | Invited Talk | Gerard Pons-Moll: How and Why Should We Learn Avatars with Sensorimotor Capabilities? |
17:00 - 17:15 | Break | Live demos with Quest and Project Aria |
17:15 - 17:28 | Deepdive | Manuel Kaufmann: Motion Capture with Electromagnetic Body-worn Sensors |
17:28 - 17:41 | Deepdive | Fangzhou Hong: EgoLM: Multi-Modal Language Model of Egocentric Motions |
17:41 - 17:54 | Deepdive & Live Demo | Xinyu Yi: Egocentric Motion Capture with Sparse Inertial/Visual Sensors |
17:55 - 18:00 | Closing | |
Organizers
Meta
Meta
Meta
NTU
Stanford University
University of Tübingen
Meta