Friday, July 4, 2025

Artificial intelligence will facilitate clinically important shoulder arthroplasty research

Today's PubMed search revealed 323 publications containing the words "artificial," "intelligence," and "shoulder."

I'm going to focus on those that relate to the computer vision/AI analysis of plain shoulder radiographs in patients having shoulder arthroplasty because:

*we are still struggling to understand the relationship of pre- and post-arthroplasty glenohumeral anatomy to postoperative shoulder comfort and function, satisfaction, complications, and revision rates.

*preoperative and postoperative plain x-rays are the most commonly used imaging modalities for comparing the pathoanatomy to the post-arthroplasty anatomy of the shoulder.

*manual measurements are tedious and time-consuming to the extent that they prohibit large-scale and multicenter studies of the relationship of pre- and post-arthroplasty anatomy to clinical outcome.

*manual measurements are observer-dependent, so inter- and intra-observer variability are high.

*artificial intelligence/computer vision is an exploding area of innovation, so the dream of a freely accessible, robust algorithm for large-scale, rapid, observer-independent measurement of shoulder anatomy before and after arthroplasty may soon become a reality.


Here is a summary of some of the articles, grouped by topic.

Segmentation and classification

Although not a shoulder reference, A Stepwise Approach to Analyzing Musculoskeletal Imaging Data With Artificial Intelligence is relevant: it provides an overview of AI and sets out steps including (1) project definition, (2) data handling, (3) model development, (4) performance evaluation, and (5) deployment into clinical care. As shown with the example of a hip, an AI model can classify images, detect objects, segment objects, and generate images for analysis.



Deep learning to automatically classify very large sets of preoperative and postoperative shoulder arthroplasty radiographs had the goal of overcoming (1) the laborious process of manually observing and recording imaging information and (2) the lack of standard methods for transferring this information to a registry. The authors used a cohort of 2303 shoulder radiographs from 1724 shoulder arthroplasty patients. Two observers did a huge amount of manual work in labeling each radiograph according to (1) laterality (left or right), (2) projection (anteroposterior, axillary, or lateral), and (3) whether the radiograph was a preoperative radiograph or showed an anatomic total shoulder arthroplasty or a reverse shoulder arthroplasty.




These data were used to train and test an automatic algorithm. 



The trained algorithm perfectly classified laterality and almost perfectly classified the imaging projection and the implant type. It took the algorithm 20.3 seconds to analyze 502 images (about 25 images per second). The authors also identified the features that the model used to predict the correct label for each task (see green dots used to predict prosthesis type).



Implant identification

Automated Shoulder Implant Manufacturer Detection using Encoder Decoder based Classifier from X-ray Images proposed an encoder-decoder based classifier trained with a supervised contrastive loss function that identified the implant manufacturer from X-ray images with an accuracy of 92%.
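For readers curious about the mechanics: a supervised contrastive loss pulls embeddings of radiographs from the same manufacturer together while pushing apart those from different manufacturers. A minimal plain-Python sketch of that loss (function and variable names are mine, not the paper's):

```python
import math

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embedding vectors.

    For each anchor, radiographs with the same label (manufacturer) are
    positives; the loss is lower when positives are more similar to the
    anchor than the remaining samples are.
    """
    def normalize(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]

    z = [normalize(v) for v in embeddings]

    def sim(i, j):  # temperature-scaled cosine similarity
        return sum(a * b for a, b in zip(z[i], z[j])) / temperature

    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        others = [a for a in range(n) if a != i]
        log_denom = math.log(sum(math.exp(sim(i, a)) for a in others))
        positives = [p for p in others if labels[p] == labels[i]]
        if not positives:
            continue  # anchors without a positive contribute nothing
        total += -sum(sim(i, p) - log_denom for p in positives) / len(positives)
        count += 1
    return total / count
```

Embeddings clustered by manufacturer yield a near-zero loss, while embeddings that scatter same-manufacturer images yield a large one.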

Implant identification will be an important component of arthroplasty outcome research. 

As in the prior study, Artificial intelligence for automated identification of total shoulder arthroplasty implants sought to develop an automated deep learning algorithm to identify shoulder arthroplasty implants from 3060 plain radiographs of patients having total shoulder arthroplasty (22 different reverse TSA and anatomic TSA prostheses from 8 implant manufacturers). The algorithm classified implants at a mean speed of 0.079 seconds per image. The model discriminated among the implants with an accuracy of 97%, and sensitivities between 0.80 and 1.00 on an independent testing set. 




Artificial Intelligence-Based Recognition of Different Types of Shoulder Implants in X-ray Scans Based on Dense Residual Ensemble-Network for Personalized Medicine proposes a deep learning-based framework to classify shoulder implants in X-ray images. 

Examples showing (a) high intra-class variability within one manufacturer (Cofield) and (b) low inter-class variability. In (b), the upper-left, upper-right, lower-left, and lower-right images show cases from four manufacturers: Cofield, Depuy, Tornier, and Zimmer, respectively.

The authors used rotational invariant augmentation (manipulating the images to create "new views") to increase the size of the training dataset 36-fold.

 Examples of rotational invariant augmentation (RIA).
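The 36-fold expansion can be pictured as rotating each annotated image in equal 10° steps. The sketch below applies the same idea to landmark coordinates, a simplified stand-in for the full image transform, and is not the authors' code:

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate point (x, y) about center (cx, cy) by the given angle."""
    t = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))

def rotational_augmentation(points, center=(0.0, 0.0), copies=36):
    """Return `copies` rotated versions of an annotation, one per 360/copies degrees."""
    step = 360.0 / copies
    return [[rotate_point(x, y, center[0], center[1], k * step) for x, y in points]
            for k in range(copies)]
```

One labeled radiograph thus yields 36 training examples, teaching the network that implant identity does not depend on image orientation.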

The authors then created a dense residual ensemble-network (DRE-Net). Their experimental results showed that DRE-Net achieved an accuracy, F1-score, precision, and recall of 85.92%, 84.69%, 85.33%, and 84.11%, respectively. They then demonstrated the generalization capability of their network and the effectiveness of rotational invariant augmentation by testing in an open-world configuration.
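For reference, the four reported metrics all derive from confusion-matrix counts; a minimal sketch for a single class treated as "positive":

```python
def classification_metrics(true_pos, false_pos, false_neg, true_neg):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    total = true_pos + false_pos + false_neg + true_neg
    accuracy = (true_pos + true_neg) / total
    precision = true_pos / (true_pos + false_pos)   # of predicted positives, how many were right
    recall = true_pos / (true_pos + false_neg)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1
```

In a multi-class setting like implant classification, these are computed per class and averaged.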

Specific measurements

Artificial Intelligence to Automatically Measure on Radiographs the Postoperative Positions of the Glenosphere and Pivot Point After Reverse Total Shoulder Arthroplasty reminds us that radiographic evaluation of the implant configuration after reverse total shoulder arthroplasty is a time-consuming task that is frequently subject to interobserver disagreement. The authors' goal was to automatically measure the postoperative radiographic location of the glenosphere center of rotation (GCR) and the pivot point (PP) in reference to the scapula in 417 primary rTSA postoperative anteroposterior radiographs.

Five measurements were manually performed by 3 observers: (1) the medial position and (2) the inferior position of the geometric center of rotation of the glenosphere (glenosphere center of rotation medialization [GCRm] and glenosphere center of rotation inferiorization [GCRi], respectively) relative to the most lateral aspect of the inferior acromion, as well as (3) the projection of the PP to GCR vector on the fossa line (PP projection), (4) the distance between GCR and glenoid (GCR-glenoid distance), and (5) the overall glenoid lateral offset (GLO). Subsequently, a deep learning algorithm was developed to automatically segment the radiograph and perform the same measurements described above.

The figure below illustrates the measurements used in this study. 

A) An example postoperative radiograph.

B) The white solid line indicates the supraspinatus fossa line (SFL), which is translated to a white dashed line to identify the lateral border of the acromion (purple point). The glenosphere center of rotation medialization and inferiorization (GCRm and GCRi) are represented using the yellow line and the blue line, respectively. 

C) The vector between the GCR and pivot point (PP; red point) is identified and projected on the SFL to calculate PP projection (red line).

D) The distance from the glenoid-glenosphere interface (GG interface; orange point) to the GCR is calculated as the GCR-glenoid distance (purple line), and the glenoid lateral offset (GLO; cyan line) is calculated as the distance from the GG interface to the lateral intersection of the center screw line and the glenosphere circle (blue point).
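The geometric core of these measurements, e.g., projecting the PP-to-GCR vector onto the supraspinatus fossa line and taking point-to-point distances, is simple vector arithmetic. A sketch with hypothetical function names:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D points, e.g. the GCR-glenoid distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def project_onto_line(vector, line_direction):
    """Signed length of `vector` projected onto a line direction,
    e.g. the PP-to-GCR vector projected onto the supraspinatus fossa line."""
    mag = math.hypot(line_direction[0], line_direction[1])
    ux, uy = line_direction[0] / mag, line_direction[1] / mag
    return vector[0] * ux + vector[1] * uy
```

Once the segmentation yields the key points, each of the five measurements is one or two calls like these.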




The figures below illustrate the segmentation and the methods used to find bony and implant markers in each radiograph for angle calculation.




The figure below shows (A) The most lateral border of the acromion is found at the most lateral acromion pixel along the supraspinatus fossa line. (B) The most lateral border of greater tuberosity was found at the most distant tuberosity pixel from the humerus shaft axis. (C) The most superior border of the greater tuberosity was found as the most distant tuberosity pixel from the shown assistant line (red dash), which is perpendicular to the shaft axis and passes through the most lateral border of tuberosity. (D) The superior glenoid tubercle was found as the most distant point of the superior glenoid from the tip of the center screw. (E) The supraspinatus fossa line and center screw line are found using the proposed line fitting method. (F) The humeral tray line (red dash) is found using the line fitting method, and a perpendicular line is defined to calculate the Humeral Alignment Angle (HAA).
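Locating a "most distant pixel" is an arg-max over the segmented pixels, and the line finding can be approximated with an ordinary least-squares fit. A sketch under those assumptions (a stand-in for, not a reproduction of, the authors' proposed line fitting method):

```python
import math

def farthest_pixel(mask_pixels, reference):
    """Return the segmented pixel farthest from a reference point,
    e.g. the most distant tuberosity pixel from the tip of the center screw."""
    return max(mask_pixels,
               key=lambda p: math.hypot(p[0] - reference[0], p[1] - reference[1]))

def fit_line(pixels):
    """Least-squares slope and intercept (y = m*x + b) through segmented pixels,
    e.g. to estimate the supraspinatus fossa line or the center screw line."""
    n = len(pixels)
    mean_x = sum(x for x, _ in pixels) / n
    mean_y = sum(y for _, y in pixels) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in pixels)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in pixels)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x
```

The landmark rules in panels A through F are all variations on these two operations applied to different segmented structures.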




The developed DL algorithm automatically measured the location of the glenosphere geometric center of rotation and the location of the PP on postoperative radiographs obtained after primary rTSA. Agreement between DL-derived measures and those from human observers was high. The DL algorithm automatically analyzed each testing image in 2 seconds.


Artificial intelligence to automatically measure glenoid inclination, humeral alignment, and the lateralization and distalization shoulder angles on postoperative radiographs after reverse shoulder arthroplasty points out that in reverse shoulder arthroplasty (RSA), the final configuration is a combination of implant features and surgical execution. Evaluation of the implant configuration is time-consuming and subject to interobserver disagreement.

The authors sought to develop an AI algorithm to automatically measure glenosphere inclination, humeral component inclination, and the lateralization and distalization shoulder angles on postoperative anteroposterior radiographs after implantation of 143 RSAs.

Four angles were analyzed: (1) glenoid inclination angle (GIA, between the central fixation feature of the glenoid and the floor of the supraspinatus fossa), (2) humeral alignment angle (HAA, between the long axis of the humeral shaft and a perpendicular to the metallic bearing of the prosthesis), (3) DSA, and (4) lateralization shoulder angle (LSA). 
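Each of these angles reduces to the angle between two direction vectors recovered from the segmentation (for example, the humeral shaft axis and a perpendicular to the humeral tray). A minimal sketch:

```python
import math

def angle_between(u, v):
    """Angle in degrees (0-180) between two 2-D direction vectors,
    e.g. the glenoid's central fixation axis and the fossa floor line."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_t = dot / (math.hypot(u[0], u[1]) * math.hypot(v[0], v[1]))
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against floating-point drift
    return math.degrees(math.acos(cos_t))
```

With the bony and implant lines extracted from each radiograph, all four measurements are single calls to a function like this.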

The figure below shows the 4 shoulder angles measured in this study: (A) lateralization shoulder angle (LSA), (B) distalization shoulder angle (DSA), (C) glenoid inclination angle (GIA), and (D) humeral alignment angle (HAA). The red dashed line represents the orientation of the humeral tray.


As in the prior study, a segmentation model was trained to segment the bony and implant elements. The AI algorithm then automatically measured the GIA, HAA, LSA, and DSA on postoperative anteroposterior radiographs. The authors found a high degree of agreement with the manual measurements. It took the model only 1.3 seconds to analyze an uploaded image and display the visual annotation and the measured values of the angles.

Deep learning model for measurement of shoulder critical angle and acromion index on shoulder radiographs points out that several bone morphological parameters, including the anterior acromion morphology, the lateral acromial angle, the coracohumeral interval, the glenoid inclination, the acromion index (AI), and the shoulder critical angle (CSA), may correlate with the presence of rotator cuff tears and glenohumeral osteoarthritis. The authors accessed normal shoulder radiographs from a large musculoskeletal radiograph dataset. These were annotated by an experienced orthopedic surgeon. The annotated images were divided into train (1004), validation (174), and test (93) sets. The mean absolute error between human-performed and machine-performed measurements on the test set was 1.68° for CSA and 0.03 for AI. This DL model may help answer whether CSA and AI are, in fact, of clinical importance.
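The reported errors are mean absolute errors between paired human and machine measurements; for clarity:

```python
def mean_absolute_error(human, machine):
    """Mean absolute difference between paired measurements,
    e.g. surgeon-measured vs. model-measured critical shoulder angles."""
    assert len(human) == len(machine)
    return sum(abs(h, ) if False else abs(h - m) for h, m in zip(human, machine)) / len(human)
```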

An accelerated deep learning model can accurately identify clinically important humeral and scapular landmarks on plain radiographs obtained before and after anatomic arthroplasty reports on the use of artificial intelligence, specifically computer vision and deep learning models (DLMs), to compare the accuracy of DLM-identified and surgeon-identified (SI) landmarks before and after anatomic shoulder arthroplasty.


240 true anteroposterior radiographs were annotated using 11 standard osseous landmarks to train a deep learning model. Each radiograph was modified to provide a training set consisting of 2,260 images. The mean deviation between DLM and SI humeral landmarks was 1.9 mm. Scapular landmarks had slightly lower deviations than humeral landmarks (1.5 mm vs. 2.1 mm). The DLM was found to be accurate with respect to 14 scapular, humeral, and glenohumeral measurements, with a mean deviation of 2.9 mm.

Development of a deep learning model can be accelerated by manipulation of a small number of original images to achieve a substantial learning set. The reliability and efficiency of this deep learning model make it a potentially powerful tool for analyzing large numbers of preoperative and postoperative radiographs while avoiding human observer bias.

Can computer vision / artificial intelligence locate key reference points and make clinically relevant measurements on axillary radiographs? demonstrates a trained and validated machine learning tool that identified key reference points and determined glenoid retroversion and glenohumeral relationships on axillary radiographs. Standardized pre- and post-arthroplasty axillary radiographs were manually annotated to locate six reference points and then used to train a computer vision model that could identify these reference points without human guidance. The model then used these reference points to determine humeroglenoid alignment in the anterior-to-posterior direction and glenoid version.





The model's accuracy was tested on a separate test set of axillary images not used in training, comparing its reference point locations, alignment and version to the corresponding values assessed by two surgeons.

On the test set of pre- and postoperative images not used in the training process, the model was able to rapidly identify all six reference point locations to within a mean of 2 mm of the surgeon-assessed points. The mean variation in alignment and version measurements between the surgeon assessors and the model was similar to the variation between the two surgeon assessors. Such approaches have the potential to enable efficient, human observer-independent assessment of shoulder radiographs, lessening the burden of manual x-ray interpretation and enabling scaling of these measurements across large numbers of patients from multiple centers so that pre- and postoperative anatomy can be correlated with patient-reported clinical outcomes.




Sleeping lady 
Leavenworth, Washington
May 2025

You can support cutting edge shoulder research that is leading to better care for patients with shoulder problems by clicking on this link.

Follow on twitter/X: https://x.com/RickMatsen
Follow on facebook: https://www.facebook.com/shoulder.arthritis
Follow on LinkedIn: https://www.linkedin.com/in/rick-matsen-88b1a8133/

Here are some videos that are of shoulder interest
Shoulder arthritis - what you need to know (see this link).
How to x-ray the shoulder (see this link).
The ream and run procedure (see this link).
The total shoulder arthroplasty (see this link).
The cuff tear arthropathy arthroplasty (see this link).
The reverse total shoulder arthroplasty (see this link).
The smooth and move procedure for irreparable rotator cuff tears (see this link).
Shoulder rehabilitation exercises (see this link).