This dataset (and the associated paper under submission) is for training a neural network to make LV longitudinal strain measurements in the A4C view.
To do this, we obtained expert labels for 3 points and 1 curve in the A4C view:
Below is the output of Unity-GLS, after conversion of its heatmaps into discrete points and a traced endocardial border.
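As a rough illustration of that post-processing, and of how a strain value follows from the traced border, here is a minimal sketch. It assumes one (H, W) heatmap per landmark and the border supplied as an (N, 2) array of points; it is not the repository's own code, and the function names and toy data are ours.

```python
# Minimal sketch only: heatmap -> discrete point, and Lagrangian strain from
# border lengths. This is NOT the Unity-GLS repository code; shapes, names and
# the toy data below are illustrative assumptions.
import numpy as np

def heatmap_to_point(heatmap: np.ndarray) -> tuple[int, int]:
    """Take the location of the maximum activation as the discrete (x, y) point."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(x), int(y)

def curve_length(points: np.ndarray) -> float:
    """Arc length of a traced border given as an (N, 2) array of points."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def longitudinal_strain(length_ed: float, length_es: float) -> float:
    """Lagrangian strain (%) between end-diastolic and end-systolic border lengths."""
    return 100.0 * (length_es - length_ed) / length_ed

# Toy usage with synthetic data
heatmap = np.zeros((224, 224)); heatmap[100, 150] = 1.0
print(heatmap_to_point(heatmap))                     # (150, 100)
border_ed = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
border_es = border_ed * 0.85                         # pretend the border shortens in systole
print(longitudinal_strain(curve_length(border_ed), curve_length(border_es)))  # -15.0
```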
Under review
To aid testing, new easy-to-use inference code for your own DICOMs has been added, with instructions.
The code with a README is available here: https://github.com/UnityImaging/unity-gls
This is easier to use than the code below (which is our canonical source).
Test A4C DICOM: test_ac4.dcm
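Before following the README's inference instructions, you may want to check that the test DICOM reads correctly on your system. The snippet below is a minimal sketch using pydicom and is independent of the unity-gls API; the normalisation shown is an assumption, not the pipeline's exact preprocessing.

```python
# Minimal sketch: read the test DICOM and normalise its frames to 8-bit arrays.
# This is independent of the unity-gls inference code; see its README for the
# actual commands. Compressed DICOMs may additionally need pylibjpeg/gdcm.
import numpy as np
import pydicom

ds = pydicom.dcmread("test_ac4.dcm")
frames = ds.pixel_array                          # (frames, H, W) or (frames, H, W, 3)
print(getattr(ds, "NumberOfFrames", 1), "frames, array shape", frames.shape)

frames = frames.astype(np.float32)
span = max(float(frames.max() - frames.min()), 1e-6)
frames_u8 = (255.0 * (frames - frames.min()) / span).astype(np.uint8)
```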
This is a snapshot of the data and code used for this paper. You should use the "latest release" if you are training your own neural network. These snapshots are provided for reproducibility.
The dataset for model development is divided into train, tune, and internal validation sets. There are 7523 videos in this dataset, which include 2587 labelled images from 1224 A4C videos (the other videos may be of different views or not completely labelled). The anonymised PNG files are downloaded separately from the labels.
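Because the PNGs and labels are separate downloads, they need to be paired up before training. The sketch below assumes a hypothetical layout (a pngs/ directory and a labels.json keyed by filename); the actual file names and label format are those described with the downloads.

```python
# Minimal sketch with an assumed (hypothetical) layout: a directory of
# anonymised PNGs and a JSON file mapping each PNG filename to its labels.
import json
from pathlib import Path

png_dir = Path("pngs")                                   # assumed directory name
labels = json.loads(Path("labels.json").read_text())     # assumed {filename: label} mapping

pairs = [(png, labels[png.name])
         for png in sorted(png_dir.glob("*.png"))
         if png.name in labels]
print(f"Matched {len(pairs)} of {len(labels)} labelled images")
```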
The dataset for model validation, which comprises 100 echocardiograms used in the external validation, is kept private for competition use.
However, the 600-page appendix (stored here due to its size), with the model output on every validation image, is available here.
The checkpoint used for the Unity-GLS paper was from training run 211, epoch 400.
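If you want to inspect those weights yourself rather than through the provided code, a minimal PyTorch sketch follows; the checkpoint filename is a placeholder, and the model class itself must come from the Unity Imaging repository.

```python
# Minimal sketch: inspect the published checkpoint with PyTorch.
# The filename below is a placeholder; use the checkpoint file as distributed.
import torch

checkpoint = torch.load("run211_epoch400.pt", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)   # some trainers nest the weights
print(len(state_dict), "tensors in checkpoint")
# model.load_state_dict(state_dict)  # once the Unity-GLS model has been instantiated
```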
A snapshot of the exact code used for the paper is provided for reproducibility. The latest version of the code is available on Unity Imaging GitHub with improvements.
Please use the latest code, models, labels, and data available from the main page if you are building upon our work. A snapshot of all the materials is provided below for reproducibility purposes.
The model weights, labels, and images are available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
The code is available under the MIT license.
We are grateful to the following institutions for funding and support:
This research and its open-access release have been conducted under:
For any questions, please contact Dr. Matthew Shun-Shin.