
the 1st Workshop on

EgoMotion

Egocentric Body Motion Tracking, Synthesis and Action Recognition

June 18th, 13:30 - 18:00 PST

CVPR 2024, Meeting Room - Summit 429, Seattle WA, USA

Context

The EgoMotion workshop focuses on in-the-wild egocentric full-body motion understanding from data captured by wearable devices. We address three major problems in full-body motion: tracking, synthesis, and action recognition. The scope spans several topics and sub-fields that have grown in importance in recent years, and the workshop aims to bring them together in a common forum. Toward egocentric full-body tracking, motion synthesis, and action recognition, we will focus on, but are not limited to, algorithms developed for the following inputs:

  • Videos - from head-mounted cameras or external cameras.
  • Non-visual body-worn sensors - inertial measurement units (IMUs), electromagnetic (EM) sensors, barometers, magnetometers, audio, etc.
  • Derived data - device trajectories, eye gaze, 3D environment reconstructions, semantic scene representations, text descriptions, narrations, etc.

In addition to algorithms, the workshop will promote multiple recent datasets and associated challenges to accelerate research in this field.


Invited Speakers

Christian Theobalt, Professor, MPI-INF
C. Karen Liu, Professor, Stanford University
Siyu Tang, Assistant Professor, ETH Zürich
Gerard Pons-Moll, Professor, University of Tübingen
Hanbyul Joo, Assistant Professor, Seoul National University
Manuel Kaufmann, Postdoc, ETH Zürich
Fangzhou Hong, Ph.D. Student, NTU
Xinyu Yi, Ph.D. Student, Tsinghua University
Jiaman Li, Ph.D. Student, Stanford University

Schedule

Time           Event                  Speaker                  Title
13:30 - 13:45  Opening                Richard Newcombe
13:45 - 14:15  Invited Talk           Siyu Tang                Egocentric 3D Human Estimation and Synthesis
14:15 - 14:45  Invited Talk           Christian Theobalt       Egocentric Human Motion Capture with Head-mounted Cameras
14:45 - 15:15  Invited Talk           Hanbyul Joo              Towards Capturing Everyday Movements to Scale Up and Enrich Human Motion Data
15:15 - 15:30  Break                                           Live demos with Quest and Project Aria
15:30 - 16:00  Invited Talk           Lingni Ma, Yuting Ye     Nymeria: Understanding Human Motion from Egocentric Data
16:00 - 16:30  Invited Talk           C. Karen Liu, Jiaman Li  Egocentric Perception for Human Motion Synthesis
16:30 - 17:00  Invited Talk           Gerard Pons-Moll         How and Why Should We Learn Avatars with Sensorimotor Capabilities?
17:00 - 17:15  Break                                           Live demos with Quest and Project Aria
17:15 - 17:28  Deep Dive              Manuel Kaufmann          Motion Capture with Electromagnetic Body-worn Sensors
17:28 - 17:41  Deep Dive              Fangzhou Hong            EgoLM: Multi-Modal Language Model of Egocentric Motions
17:41 - 17:54  Deep Dive & Live Demo  Xinyu Yi                 Egocentric Motion Capture with Sparse Inertial/Visual Sensors
17:55 - 18:00  Closing

Organizers

Lingni Ma, Research Scientist, Meta
Richard Newcombe, VP & Research Scientist, Meta
Yuting Ye, Research Scientist, Meta
Ziwei Liu, Assistant Professor, NTU
Yifeng Jiang, Ph.D. Student, Stanford University
Vladimir Guzov, Ph.D. Student, University of Tübingen
Edward Miller, Research Product Manager, Meta