Publication

Feasibility of 3D body tracking from monocular 2D video feeds in musculoskeletal telerehabilitation

dc.contributor.author: Clemente, Carolina
dc.contributor.author: Chambel, Gonçalo
dc.contributor.author: Silva, Diogo C. F.
dc.contributor.author: Montes, António Mesquita
dc.contributor.author: Pinto, Joana F.
dc.contributor.author: Silva, Hugo Plácido da
dc.date.accessioned: 2025-05-06T08:27:16Z
dc.date.available: 2025-05-06T08:27:16Z
dc.date.issued: 2023-12-29
dc.description.abstract: Musculoskeletal conditions affect millions of people globally; however, conventional treatments pose challenges concerning price, accessibility, and convenience. Many telerehabilitation solutions offer an engaging alternative but rely on complex hardware for body tracking. This work explores the feasibility of a model for 3D Human Pose Estimation (HPE) from monocular 2D videos (MediaPipe Pose) in a physiotherapy context, by comparing its performance to ground truth measurements. MediaPipe Pose was investigated in eight exercises typically performed in musculoskeletal physiotherapy sessions, where the Range of Motion (ROM) of the human joints was the evaluated parameter. The model showed the best performance for the shoulder abduction, shoulder press, elbow flexion, and squat exercises. Results showed a MAPE ranging between 14.9% and 25.0%, a Pearson’s correlation coefficient ranging between 0.963 and 0.996, and a cosine similarity ranging between 0.987 and 0.999. Some exercises (e.g., seated knee extension and shoulder flexion) posed challenges due to unusual poses, occlusions, and depth ambiguities, possibly related to a lack of training data. This study demonstrates the potential of HPE from monocular 2D videos as a markerless, affordable, and accessible solution for musculoskeletal telerehabilitation. Future work should focus on exploring variations of the 3D HPE models trained on physiotherapy-related datasets, such as the Fit3D dataset, and on post-processing techniques to enhance the model’s performance.
dc.description.sponsorship: 2022.04901.CEECIND
dc.identifier.citation: Clemente, C., Chambel, G., Silva, D. C. F., Montes, A. M., Pinto, J. F., & Silva, H. P. da. (2024). Feasibility of 3D body tracking from monocular 2D video feeds in musculoskeletal telerehabilitation. Sensors, 24(1), Article 206. https://doi.org/10.3390/s24010206
dc.identifier.doi: 10.3390/s24010206
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10400.22/30033
dc.language.iso: eng
dc.peerreviewed: yes
dc.publisher: MDPI
dc.relation: UIDB/50008/2020
dc.relation.hasversion: https://www.mdpi.com/1424-8220/24/1/206
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Telerehabilitation
dc.subject: Musculoskeletal
dc.subject: 3D human pose estimation
dc.subject: MediaPipe Pose
dc.subject: ROM
dc.subject: 2D camera
dc.subject: Monocular
dc.subject: Videos
dc.subject: Deep learning
dc.title: Feasibility of 3D body tracking from monocular 2D video feeds in musculoskeletal telerehabilitation
dc.type: research article
dspace.entity.type: Publication
oaire.citation.title: Sensors
oaire.citation.volume: 24
oaire.version: http://purl.org/coar/version/c_970fb48d4fbd8a85
person.familyName: Mesquita Montes
person.familyName: Silva
person.givenName: António
person.givenName: Diogo C. F.
person.identifier.ciencia-id: 9D19-1431-DA3E
person.identifier.ciencia-id: AA15-B41A-4DB6
person.identifier.orcid: 0000-0003-2777-8050
person.identifier.orcid: 0000-0002-3131-3232
person.identifier.scopus-author-id: 57190182435
relation.isAuthorOfPublication: bdb9ef9c-0b90-4b92-bc1a-f00ff9f14f8f
relation.isAuthorOfPublication: 0c275f0b-3366-4968-8f6a-4d30457692d4
relation.isAuthorOfPublication.latestForDiscovery: bdb9ef9c-0b90-4b92-bc1a-f00ff9f14f8f
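
The abstract above evaluates MediaPipe Pose by comparing joint Range of Motion extracted from monocular 2D video against ground-truth measurements, scored with MAPE, Pearson's correlation, and cosine similarity. Below is a minimal sketch of how a per-frame joint angle (here, elbow flexion) and those agreement metrics could be computed with MediaPipe's Python API. The video path, the choice of the right arm, and the placeholder ground-truth series are illustrative assumptions, not the authors' published pipeline.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose
L = mp_pose.PoseLandmark  # landmark index enum (shoulders, elbows, wrists, ...)

def joint_angle(a, b, c):
    """Angle at vertex b, in degrees, formed by 3D landmarks a-b-c."""
    u = np.array([a.x - b.x, a.y - b.y, a.z - b.z])
    v = np.array([c.x - b.x, c.y - b.y, c.z - b.z])
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Per-frame elbow flexion angle from a single monocular video.
angles = []
cap = cv2.VideoCapture("exercise.mp4")  # hypothetical 2D video feed
with mp_pose.Pose(static_image_mode=False, model_complexity=2) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.pose_world_landmarks:  # metric 3D landmarks, hip-centred
            lm = res.pose_world_landmarks.landmark
            angles.append(joint_angle(lm[L.RIGHT_SHOULDER],
                                      lm[L.RIGHT_ELBOW],
                                      lm[L.RIGHT_WRIST]))
cap.release()

pred = np.asarray(angles)               # assumes at least one detected frame
rom = pred.max() - pred.min()           # Range of Motion over the recording

# Agreement with a ground-truth series of the same length. The copy below
# is a placeholder; in the study this would come from the reference system.
gt = pred.copy()
mape = 100.0 * np.mean(np.abs((gt - pred) / gt))  # assumes gt angles != 0
pearson_r = np.corrcoef(gt, pred)[0, 1]
cos_sim = np.dot(gt, pred) / (np.linalg.norm(gt) * np.linalg.norm(pred))
print(f"ROM: {rom:.1f} deg, MAPE: {mape:.1f}%, "
      f"r: {pearson_r:.3f}, cos: {cos_sim:.3f}")
```

Using pose_world_landmarks (metric 3D coordinates with the origin at the hips) rather than the image-normalised landmarks keeps the angle computation independent of camera framing, which matters when the evaluated parameter is a physical joint ROM.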

Files

Original bundle
Name: ART_Diogo Silva 1.pdf
Size: 6.92 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 4.03 KB
Format: Item-specific license agreed upon to submission