From Pixels to Posture: BIOTONIX's Breakthrough in AI-driven Anatomical Detection

Introduction 

Identifying objects and their points of interest in raw images is an arduous, time-consuming task that can require highly specialized skills. This task represents a first step in the interpretation of images and their quantitative analysis (Lindner 2017). 

An autonomous system that can perform this task accurately and reliably would have significant value in many domains of scientific investigation, such as morphometric analysis, statistical shape analysis, pose estimation, and 3D reconstruction, as well as in commercial applications.

To name some examples in industry:

Modern medicine and physiotherapy heavily rely on accurate identification of anatomical points (depending on the application area, several synonyms for point are used, such as marker, landmark, and keypoint) for diagnosis, treatment, and rehabilitation (e.g., Ghesu et al. 2016, Fig. 1). 

Figure 1 – Landmarks detected in medical images. Source: Ghesu et al. 2016

Fashion and tailoring industries can leverage this technology to take accurate biometric measurements. By identifying key anatomical points, designers can create clothes that fit better, reducing the number of returns and increasing customer satisfaction. It could also revolutionize online shopping, allowing for virtual try-ons with precise size recommendations (e.g., Liao et al. 2023, Fig. 2). 

Figure 2 – Body landmark detection for customized tailor services. Source: Liao et al. 2023

Ergonomics and workplace design rely crucially on the understanding of the individual's anatomy. Precise measurements can inform the design of office furniture, tools, and workspaces that reduce the risk of musculoskeletal problems and increase productivity (e.g., Kim et al. 2021, Fig. 3). 

Figure 3 – Skeletal tracking using 3 different systems. Source: Kim et al. 2021

Over the years, several ingenious methods of automatic image annotation have been developed, whether for segmenting or detecting the object or its points of interest (e.g., feature point detection, groupwise image registration, thresholding, edge detection, the sliding-window approach). These traditional methods require handcrafted feature definitions, which makes them less flexible to variability in image conditions (O’Mahony et al. 2020). 

Methods based on machine learning (ML), in particular deep learning (DL), have revolutionized this field in recent years. Object detection, semantic segmentation, and keypoint detection are now standard computer-vision tasks that benefit from DL; however, they require a large volume of annotated data, which is not always available, as accumulating it takes time and can be very expensive.
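For readers curious about what "annotated data" looks like in this context, here is a minimal, COCO-style keypoint annotation for a single photograph. The file name, point names, and coordinate values are purely illustrative and are not drawn from any real dataset.

```python
# Minimal, COCO-style keypoint annotation for one image (illustrative values only).
# Keypoints are stored as flat [x, y, visibility] triplets, where visibility is
# 0 = not labeled, 1 = labeled but occluded, 2 = labeled and visible.
annotation = {
    "image": "subject_0001_lateral.jpg",        # hypothetical file name
    "width": 1280,
    "height": 1920,
    "keypoint_names": ["tragus_right", "acromion_right", "greater_trochanter_right"],
    "keypoints": [612, 233, 2,                   # tragus_right (x, y, visibility)
                  598, 472, 2,                   # acromion_right
                  605, 941, 1],                  # greater_trochanter_right (occluded)
}

# A supervised keypoint detector is trained on thousands of such records,
# which is why assembling the dataset is slow and costly.
num_points = len(annotation["keypoints"]) // 3
print(f"{num_points} annotated points in {annotation['image']}")
```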

The Evolution of Posture Assessment: From Visual to Digital

Although some more objective methods for assessing standing posture from photographs were proposed as early as the first half of the 20th century (e.g., Kraus & Eisenmenger-Weber 1945; MacEwan & Howe 1932), before the advent of modern digital technology posture assessment predominantly relied on anamnestic examination and visual observation by practitioners, normally based on qualitative observation of the curvatures of the spine and of body misalignment with respect to the plumb line in anterior, posterior, and lateral views (Iunes et al. 2009). While this approach provided valuable insights, it is inevitably subjective and depends largely on the individual expertise of the assessor. 

With the popularization of digital photographic cameras in the 1990s, the field saw a significant advancement. Photographic systems for evaluating static posture emerged as a new standard, paving the way for quantitative methods of assessment. These systems sought to reduce the inherent subjectivity of visual evaluations and provide a more standardized, repeatable measurement process.

Zonnenberg et al. (1996) highlighted the potential of these methods by focusing on their intra/interrater reliability, showcasing how digital tools could offer consistency in measurements on body posture photographs (Fig. 4). However, to be able to quantitatively assess posture deviations, accurate identification of specific anatomical points on the human body is necessary.

Figure 4 – A basic photographic set-up for quantitative posture analysis. Source: Zonnenberg et al. (1996)
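To make the idea of quantitative photographic assessment concrete, the short sketch below computes two typical metrics from digitized landmark coordinates: a shoulder-tilt angle and a horizontal deviation from a vertical plumb line. All coordinate values and the pixel-to-centimetre factor are invented for illustration and are not taken from the cited studies.

```python
import math

# Hypothetical landmark coordinates digitized from a posterior-view photograph (pixels).
right_acromion = (412.0, 530.0)
left_acromion = (628.0, 541.0)
c7_spinous = (521.0, 470.0)

# Hypothetical calibration factor obtained from a reference object of known size.
cm_per_pixel = 0.12

# Shoulder tilt: angle of the inter-acromial line relative to the horizontal.
dx = left_acromion[0] - right_acromion[0]
dy = left_acromion[1] - right_acromion[1]
shoulder_tilt_deg = math.degrees(math.atan2(dy, dx))

# Plumb-line deviation: horizontal offset of C7 from a vertical line dropped
# through an assumed reference x-coordinate (e.g., midpoint between the ankles).
plumb_line_x = 520.0
c7_deviation_cm = (c7_spinous[0] - plumb_line_x) * cm_per_pixel

print(f"Shoulder tilt: {shoulder_tilt_deg:.1f} deg")
print(f"C7 lateral deviation from plumb line: {c7_deviation_cm:.2f} cm")
```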

The Pioneering Steps of BIOTONIX

A pioneer in the development of photographic systems for quantitative posture assessment, BIOTONIX emerged in the early 2000s, and its history is intertwined with the history of scientific development in this area (Guimond et al. 2003). Central to BIOTONIX's system was the paradigm established by Kendall and collaborators (Kendall et al. 2005). 

Instead of visually estimating deviations from plumb-line alignment, as prescribed by these authors, the photogrammetric, marker-based BIOTONIX system represented an innovation: it provided a way to quantify these deviations by recording images and the coordinates of anatomical reference points in metric units.

While innovative and effective for its time, this approach was not without its challenges. The reliance on manual palpation meant there were inherent variations between assessments, influenced by the individual expertise and experience of the assessing physiotherapist.

One of the standout features of the original BIOTONIX system was its attention to detail in the photography phase. To ensure the anatomical points were clearly visible and distinguishable in the photos, retroreflective surface markers were used (Fig. 5). When the camera's flash was activated, these markers intensely reflected the light, creating high-contrast points on the captured images.

Figure 5 – The retroreflective marker sphere and application sticker assembly (left) and the retroreflective marker sticker construction (right). Source: Guimond et al. 2003

The stark contrast ensured that the anatomical points were unmistakable, reducing the chances of misinterpretation during the evaluation phase. This was especially crucial given the number of points being assessed and the need for high precision in determining postural alignment.

Utilizing classic computer vision algorithms, the software scanned the images, detecting and recording the coordinates of each high-contrast anatomical point. This automation not only expedited the process but also introduced a new level of consistency and accuracy to the procedure. With the coordinates recorded, the software could then compute postural deviations, comparing the observed postural alignment with the predefined ideal.
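The original detection code is not published, but the classic computer-vision recipe described here (finding high-contrast markers by thresholding) can be sketched roughly as follows with OpenCV. The threshold value and minimum blob area are illustrative assumptions, not BIOTONIX parameters.

```python
import cv2
import numpy as np

def detect_bright_markers(image_bgr, threshold=220, min_area=20):
    """Return (x, y) centroids of high-contrast marker blobs in a flash photograph.

    Rough sketch of the classic approach: grayscale -> global threshold ->
    connected components -> centroid of each sufficiently large blob.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    num_labels, _, stats, centroids = cv2.connectedComponentsWithStats(binary)

    markers = []
    for label in range(1, num_labels):            # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            markers.append(tuple(centroids[label]))
    return markers

if __name__ == "__main__":
    # Synthetic test image: dark background with three bright "markers".
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    for cx, cy in [(100, 120), (320, 240), (500, 400)]:
        cv2.circle(img, (cx, cy), 6, (255, 255, 255), -1)
    print(detect_bright_markers(img))
```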

The innovative approach and methodology behind the original BIOTONIX system was patented in 2003 by Sylvain Guimond, PhD and collaborators (Guimond et al. 2003). Titled "System and Method for Automated Biomechanical Analysis and the Detection and Correction of Postural Deviations," this patent stands as a testament to the pioneering spirit and technical achievements behind the system at that time. In the following years, different research groups validated the system's reliability (Harrison et al. 2008, 2007; Janik et al. 2007; Normand et al. 2007, 2002). 

This patented methodology set a new standard in the world of biomechanical analysis, marking a shift from purely manual methods to more automated, standardized approaches, such as DIPA (Furlanetto et al. 2012) and SAPO (Ferreira et al. 2011).  

The Problem: The Challenge of Anatomical Point Identification for Posture Assessment

Although photographic methods for quantitative posture analysis represented a considerable advance in clinical practice, the identification of specific anatomical points on the patient's body before taking photographs was still based on visual inspection and palpation, a task that is time-consuming and dependent on the expertise of the practitioner.

With the advancement of technology, a game-changing solution has emerged: an AI-based system hyper-specialized in identifying anatomical points in digital photos.

BIOTONIX 2.0: Embracing the Future with AI-Powered Autonomy

Building on its rich history and proven expertise, BIOTONIX embarked on an ambitious journey starting in 2020. With the support of the Canadian federal government's Scientific Research and Experimental Development (SR&ED) program, the company set its sights on pioneering a new era of postural assessment.

BIOTONIX decided to develop an autonomous postural assessment system, one that would seamlessly bypass the need for human intervention. To achieve this, BIOTONIX turned to advanced artificial intelligence (AI) techniques, seeking to harness the power of AI for unparalleled precision and efficiency.

Why the Shift to AI?

While the original BIOTONIX system was revolutionary for its time, there were clear areas ripe for improvement:

  • Time Efficiency: The manual marking process, though meticulous, was time-consuming.
  • Consistency: Human intervention, despite best efforts, introduced variables that could affect the outcome. 
  • Scalability: An autonomous system offers potential for broader applications without being limited by manual processes. 

AI, with its ability to rapidly process data, learn from vast datasets, and make precise identifications, presented a solution to these challenges. By integrating AI, BIOTONIX aimed to redefine postural assessment, making it faster, more consistent, and universally applicable.

Harnessing Data: The Foundation of BIOTONIX's AI Revolution

In the world of AI, data is paramount. Recognizing this, BIOTONIX leveraged an invaluable asset: a carefully curated database containing thousands of postural assessments. This vast collection of data provided a goldmine of information, laying the foundation for the development of AI-driven postural assessment techniques.

Training Specialized AI Models

Using this extensive database, BIOTONIX set about training highly specialized AI models. The goal was to teach the AI to identify precise anatomical points in photos containing humans in a relaxed upright position.

Data was carefully annotated, and through iterative training processes the model learned to discern subtle nuances, recognize patterns, and accurately pinpoint the crucial anatomical points. The AI's predictions proved not only accurate but also consistent across a diverse range of postures and body types.

The set of BIOTONIX anatomical markers. Source: Guimond et al. 2003
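BIOTONIX has not disclosed its model architecture, but one common formulation of this task is direct coordinate regression with a convolutional network. The deliberately tiny PyTorch sketch below, which trains on random tensors in place of real annotated photographs, is meant only to illustrate the shape of such a training loop, not the production system.

```python
import torch
import torch.nn as nn

NUM_POINTS = 9  # e.g., the size of the lateral-view marker set

# A deliberately tiny CNN that regresses normalized (x, y) coordinates
# for each keypoint; a production model would be far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_POINTS * 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(3):  # stand-in for many epochs over annotated photographs
    images = torch.rand(8, 3, 256, 256)          # fake batch of photos
    targets = torch.rand(8, NUM_POINTS * 2)      # fake normalized (x, y) labels
    preds = model(images)
    loss = loss_fn(preds, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```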

Anatomical Points Recognized in the Right Lateral View by BIOTONIX's AI

The right lateral view provides a unique perspective, especially when assessing the alignment of the sagittal plane. BIOTONIX's AI system, with its commitment to a comprehensive assessment, identifies 9 critical anatomical points for this view (Fig. 6):

  • Tragus of the Right Ear (SD01) 
  • Glabella (SD02) 
  • Middle of the Chin (SD03) 
  • Right Shoulder Over the Acromion (SD04) 
  • Right Posterior Superior Iliac Spine & Right Anterior Superior Iliac Spine (SD05 and SD08) 
  • Greater Trochanter (SD09) 
  • Gerdy's Tubercle (SD10) 
  • Transverse Tarsal Joint (SD11) 

Figure 6 – Example of AI detection of BIOTONIX markers in right lateral view. Source: BIOTONIX Posture
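In software, a marker set like this is typically carried as a small schema that maps point codes to human-readable names, which makes downstream checks straightforward. The sketch below encodes the lateral-view codes listed above; the dictionary layout and helper function are illustrative assumptions rather than BIOTONIX's internal format.

```python
# Right lateral view marker schema (codes and names as listed above).
LATERAL_MARKERS = {
    "SD01": "Tragus of the right ear",
    "SD02": "Glabella",
    "SD03": "Middle of the chin",
    "SD04": "Right shoulder over the acromion",
    "SD05": "Right posterior superior iliac spine",
    "SD08": "Right anterior superior iliac spine",
    "SD09": "Greater trochanter",
    "SD10": "Gerdy's tubercle",
    "SD11": "Transverse tarsal joint",
}

def missing_markers(detected: dict) -> list:
    """Return the codes the detector failed to locate (hypothetical helper)."""
    return [code for code in LATERAL_MARKERS if code not in detected]

print(missing_markers({"SD01": (612, 233), "SD04": (598, 472)}))
```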

Anatomical Points Recognized in the Frontal View by BIOTONIX's AI

Grounded in the analytical paradigm set by Kendall et al. (2005), the BIOTONIX system's frontal view model is designed to recognize 16 specific anatomical points (Fig. 7). Each of these points plays a pivotal role in understanding and assessing postural alignment in the frontal plane:

  • Glabella (FA02) 
  • Middle of the Chin (FA04) 
  • Right and Left Shoulders Over the Acromion (FA05 and FA07) 
  • Jugular Notch (FA06) 
  • Umbilicus (FA08) 
  • Right and Left Anterosuperior Iliac Spine (FA09 and FA11) 
  • Right and Left Wrists Over the Styloid Process of the Radius (FA12 and FA13) 
  • Right and Left Patella (FA14 and FA15) 
  • Centered Between the Right and Left Medial and Lateral Malleoli (FA16 and FA18) 
  • Anterior Aspects of the Right and Left Distal Phalanx of the Great Toe (FA19 and FA20) 

Figure 7 – Example of AI detection of BIOTONIX markers in frontal view. Source: BIOTONIX Posture

Anatomical Points Recognized in the Back View by BIOTONIX's AI

Building on its analytical foundation, BIOTONIX's AI system for the back view targets 14 specific anatomical points essential for a thorough postural assessment (Fig. 8). These points provide insights into the alignment of the spine, shoulders, pelvis, and lower limbs:

  • Spinous Process of the 7th Cervical Vertebra (FP03) 
  • Right and Left Shoulders Over the Acromion (FP04 and FP05) 
  • Spinous Process of the 5th Thoracic Vertebra (FP06) 
  • Right and Left Posterior Superior Iliac Spine (FP09 and FP07) 
  • Right and Left Wrists Over the Styloid Process of the Ulna (FP11 and FP10) 
  • Center of the Right and Left Popliteal Cavity (FP13 and FP12) 
  • Right and Left Achilles Tendon at the Level of the Medial Malleolus (FP16 and FP14) 
  • Calcaneus of the Right and Left Foot (FP19 and FP17) 

Figure 8 – Example of AI detection of BIOTONIX markers in back view. Source: BIOTONIX Posture

The system also automatically detects, with high precision, the four calibration markers located on the panel installed behind the subject in the clinical setup.
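Those calibration markers are what allow pixel coordinates to be converted into metric units. A minimal sketch of that conversion is shown below; the marker spacing, coordinates, and function name are hypothetical rather than BIOTONIX specifications.

```python
import math

def pixels_to_cm_factor(marker_a, marker_b, known_distance_cm):
    """Scale factor (cm per pixel) from two calibration markers a known distance apart."""
    pixel_distance = math.dist(marker_a, marker_b)
    return known_distance_cm / pixel_distance

# Hypothetical detected positions of two of the four panel markers (pixels),
# assumed to be 50 cm apart on the physical panel.
top_left, top_right = (310.0, 180.0), (910.0, 182.0)
scale = pixels_to_cm_factor(top_left, top_right, known_distance_cm=50.0)

# Any measured landmark offset can then be reported in centimetres.
offset_px = 42.0
print(f"{scale:.4f} cm/px -> offset of {offset_px * scale:.1f} cm")
```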

Diverse Applications for a Modern Age 

BIOTONIX's innovative approach goes beyond a standalone model. The AI trained to detect anatomical points has been integrated into a broader ecosystem, tailored to different user experiences and applications:

  • BTX APP: Taking postural assessment mobile, the BTX App harnesses the power of BIOTONIX's AI for users on the go. With a user-friendly interface and real-time assessment capabilities, it offers the perfect blend of convenience and technology.
  • BIOTONIX Posture: For a more comprehensive experience, the BIOTONIX Posture web application provides in-depth analysis, tracking, and reporting functionalities. It offers a robust platform for detailed postural evaluations and recommendations. 

Conclusion 

From pioneering postural assessment systems in the early 2000s to leading the AI revolution in biomechanical analysis, BIOTONIX remains at the forefront of innovation. With its suite of applications and cutting-edge AI capabilities, it promises a future where postural health is accessible, accurate, and seamlessly integrated into our digital lives.

Stay tuned! With BIOTONIX, the future promises innovation, precision, and a relentless pursuit of excellence.

To learn more about our AI system or to see it in action, visit https://biotonix.com/en/. Interested in integrating it into your practice? Reach out to our team at https://biotonix.com/en/contactez-nous/.

References: 

AAOS, P.C., 1947. Posture and its Relationship to Orthopaedic Disabilities. A Report of the Posture Committee of the American Academy of Orthopaedic Surgeons. 

Ferreira, Elizabeth A., et al. "Quantitative assessment of postural alignment in young adults based on photographs of anterior, posterior, and lateral views." Journal of Manipulative and Physiological Therapeutics 34.6 (2011): 371-380. 

Furlanetto, Tássia Silveira, et al. "Validating a postural evaluation method developed using a Digital Image-based Postural Assessment (DIPA) software." Computer Methods and Programs in Biomedicine 108.1 (2012): 203-212. 

Ghesu, Florin C., et al. "An artificial agent for anatomical landmark detection in medical images." Medical Image Computing and Computer-Assisted Intervention-MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part III 19. Springer International Publishing, 2016. 

Guimond, Sylvain, et al. "System and method for automated biomechanical analysis and the detection and correction of postural deviations." U.S. Patent No. 6,514,219. 4 Feb. 2003. 

Harrison, Deed E., et al. "Upright static pelvic posture as rotations and translations in 3-dimensional from three 2-dimensional digital images: validation of a computerized analysis." Journal of Manipulative and Physiological Therapeutics 31.2 (2008): 137-145. 

Harrison, Deed E., et al. "Validation of a computer analysis to determine 3-D rotations and translations of the rib cage in upright posture from three 2-D digital images." European Spine Journal 16 (2007): 213-218. 

Iunes, D. H., et al. "Comparative analysis between visual and computerized photogrammetry postural assessment." Brazilian Journal of Physical Therapy 13 (2009): 308-315. 

Janik, Tadeusz J., et al. "Validity of a computer postural analysis to estimate 3-dimensional rotations and translations of the head from three 2-dimensional digital images." Journal of Manipulative and Physiological Therapeutics 30.2 (2007): 124-129. 

Kendall, Florence Peterson, et al. Muscles: Testing and Function with Posture and Pain. 5th ed. Baltimore, MD: Lippincott Williams & Wilkins, 2005. 

Kim, Woojoo, et al. "Ergonomic postural assessment using a new open-source human pose estimation technology (OpenPose)." International Journal of Industrial Ergonomics 84 (2021): 103164. 

Kraus, Hans, and S. Eisenmenger-Weber. "Evaluation of posture based on structural and functional measurements." Physical Therapy 25.6 (1945): 267-271. 

Liao, Iman Yi, Eric Savero Hermawan, and Munir Zaman. "Body landmark detection with an extremely small dataset using transfer learning." Pattern Analysis and Applications 26.1 (2023): 163-199. 

Lindner, Claudia. "Automated image interpretation using statistical shape models." Statistical Shape and Deformation Analysis. Academic Press, 2017. 3-32. 

MacEwan, Charlotte G., and Eugene C. Howe. "An objective method of grading posture." Research Quarterly. American Physical Education Association 3.3 (1932): 144-157. 

Normand, Martin C., et al. "Three dimensional evaluation of posture in standing with the PosturePrint: an intra- and inter-examiner reliability study." Chiropractic & Osteopathy 15 (2007): 1-11. 

Normand, Martin C., et al. "Reliability and measurement error of the biotonix video posture evaluation system—part I: inanimate objects." Journal of Manipulative and Physiological Therapeutics 25.4 (2002): 246-250. 

O’Mahony, Niall, et al. "Deep learning vs. traditional computer vision." Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Volume 1 1. Springer International Publishing, 2020. 

Zonnenberg, A. J. J., et al. "Intra/interrater reliability of measurements on body posture photographs." Cranio® 14.4 (1996): 326-331. 
