Deep Learning-Based Real-Time Biomechanics Capture
Arnab Dey
Second-year Doctoral Candidate
Abstract
Capturing and tracking high-detail human motion in real time is an active research topic, fundamental to a wide range of applications including e-health, sports performance analysis, human-robot interaction and augmented reality. This multidisciplinary doctoral project works across the domains of real-time computer vision, deep learning and biomechanics. Its aim is to address the problem of acquiring the pose, shape, appearance, motion and dynamics (torques, forces and velocities) of humans in 3D, in real time, using a multi-camera environment. One of the major challenges in live motion capture is the dense modelling of non-rigid scenes.
The objective of this doctoral project is to design an end-to-end approach in which the input to the training network is the set of images from multiple cameras observing the scene, and the output is the high-detail 3D geometry and the dynamics acting on the human body. To this end, we aim to use RGB-D sensor consistency to train the network in an unsupervised manner, such that every image transforms correctly onto every other image with minimal error. The training phase will use many sensors; however, reconstructing the biomechanics with the trained network will require far fewer sensors, potentially even a single one. Such a low-cost set-up with a single camera could be used by a medical (or sports) practitioner for diagnosis.
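To illustrate the kind of unsupervised sensor-consistency training described above, the minimal sketch below shows a hypothetical multi-view photometric consistency loss in PyTorch: a depth map predicted for one RGB-D view is used to warp a second, calibrated view back into the first, and the resulting photometric error supervises the network without any 3D ground truth. The function names, tensor shapes and the assumption of known intrinsics K and relative pose (R, t) are illustrative choices, not the project's actual pipeline.

```python
import torch
import torch.nn.functional as F


def reproject(img_tgt, depth_src, K, R, t):
    """Re-render the target view from the source viewpoint.

    For every pixel of the source view, back-project it with the predicted
    source depth, transform the 3D point into the target camera with (R, t),
    project it with the intrinsics K, and bilinearly sample the target image
    at that location. Both views are assumed to share the same resolution.
    """
    B, _, H, W = img_tgt.shape
    device = img_tgt.device
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    # Homogeneous pixel coordinates of the source view, shape (3, H*W).
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    rays = torch.linalg.inv(K) @ pix                          # camera rays
    pts = rays.unsqueeze(0) * depth_src.reshape(B, 1, -1)     # 3D points, source frame
    pts = R @ pts + t.reshape(1, 3, 1)                        # 3D points, target frame
    uv = K @ pts                                              # project into target image
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
    # Normalise pixel coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        [2.0 * uv[:, 0] / (W - 1) - 1.0, 2.0 * uv[:, 1] / (H - 1) - 1.0], dim=-1
    ).reshape(B, H, W, 2)
    return F.grid_sample(img_tgt, grid, align_corners=True)


def sensor_consistency_loss(img_src, img_tgt, depth_src, K, R, t):
    """L1 photometric error between the source image and the re-rendered target view."""
    return (reproject(img_tgt, depth_src, K, R, t) - img_src).abs().mean()
```

In a multi-camera training set-up, such a loss would be summed over all ordered pairs of calibrated views, so that each predicted depth map is constrained by every other sensor observing the scene.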
Supervisors
- Andrew Comport, Researcher, I3S/CNRS
- Tarek Hamel, Professor, I3S/CNRS
Tutor from Academia
- Pauline Gerus, Researcher, LAMHESS, Université Côte d'Azur
Mentor from Industry
- Xavier Bouquet, CEO, Youdome
International 6-month secondment in Australia under the supervision of Tom Drummond, Professor, University of Melbourne