Introduction
Balanced steady-state free precession (bSSFP) cine is the predominant cine sequence in clinical imaging owing to its high signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) [1, 2]. However, two types of artifacts, banding artifacts and flow artifacts, are common in bSSFP cine and may severely degrade image quality [3]. Banding artifacts appear as dark bands in the image and can be reduced by shimming, frequency scout, and phase cycling [4-15]. Among these, phase cycling is highly effective, but it increases scan time and has therefore seen limited clinical use. Furthermore, phase cycling often shifts the off-resonance into flow regions, inducing new flow artifacts that would otherwise be absent. Flow artifacts can arise from several causes, the most common being out-of-slice signals from spins flowing through a dark band. In this case, flow artifacts often cause spurious hyperenhancement along the phase-encoding direction and obscure neighboring tissues [16-18]. Flow artifacts can be suppressed by flow-compensation gradients [19], partial dephasing [20, 21], and slice encoding [22].
Although the methods above can suppress the two artifacts, most of them require sequence modifications [23]. Deep learning-based post-processing, by contrast, has received little attention for this application despite its outstanding performance in suppressing other artifacts [24-29]. Thus far, deep learning has only been applied to suppress banding artifacts in non-cine bSSFP imaging [30]. Removing banding artifacts in bSSFP cine imaging is more challenging because acquiring cine movie labels free of both banding and flow artifacts is more difficult. Moreover, deep learning is often criticized for its lack of interpretability [31]. When a simple neural network removes artifacts, it often remains unknown whether the removal was truly based on recognition of the artifacts.
In this study, we sought to develop a partially interpretable dual-stage network to jointly suppress banding and flow artifacts in non-phase-cycled bSSFP cine imaging (i.e., regular bSSFP cine). Because the method does not require acquisition of phase-cycled data at test time, it is a pure post-processing technique. Interpretability was improved by using two cascaded U-Nets [32]: the first U-Net recognizes the location and type of artifacts, and the second suppresses the artifacts under the guidance of the first. The dual-stage network was trained using a phase-cycling method tailored to balance suppression of banding artifacts against promotion of flow artifacts. Evaluation was performed in both healthy subjects and patients using a variety of sequence parameters.
Discussion
In this work, we propose a partially interpretable dual-stage neural network for joint suppression of banding and flow artifacts in non-phase-cycled bSSFP cine. As a post-processing technique, the method reduces banding and flow artifacts relative to traditional cine without modifying the sequence. In addition, the proposed method does not provoke the new flow artifacts associated with large frequency offsets, a known problem for traditional full-range phase cycling. The VI stage of the network not only identifies where artifacts are present and of which type, but also explains why the network modifies the original image in the corresponding manner. In a busy clinical environment, clinicians may not have time to check every cine frame and slice for artifacts. Owing to its partial interpretability and fast processing, the proposed network can be readily deployed in a clinical environment both to alert clinicians to the presence of artifacts and to suppress them, improving image quality.
Performance of the method in suppressing the two artifacts is largely driven by two factors in network training. First, we developed a novel approach, the short-range phase cycling method, to obtain training labels. Prior to this work, how to jointly suppress banding and flow artifacts in bSSFP cine remained an open question. The challenge is that although phase cycling suppresses banding artifacts well, it also promotes flow artifacts, which are difficult to suppress completely [14]. Several methods have been proposed, yet none has seen common use in practice [23]. Our data suggest that the proposed short-range phase cycling method addresses this issue well. The second driver of performance is the inclusion of 12 frequency offsets that densely cover the full 2π range of the bSSFP spectral period [1]. Since each frequency is associated with different banding and flow artifact patterns, including all of them improves the diversity of the training data and the generalizability of the method.
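For concreteness, the 12 offsets covering the 2π spectral period can be computed as below. This is a sketch under stated assumptions: the increments are taken as equally spaced, and the TR value is a typical bSSFP cine figure, not necessarily the one used in this study.

```python
import numpy as np

# Assumed parameters; the study's exact TR and offset values may differ.
n_offsets = 12
TR = 3e-3  # repetition time in seconds, typical for bSSFP cine

# Equally spaced RF phase-cycling increments covering one 2*pi spectral period.
phase_increments = 2.0 * np.pi * np.arange(n_offsets) / n_offsets

# A phase increment of d_phi shifts the apparent off-resonance by
# d_phi / (2*pi*TR) Hz, so the dark band sweeps the full 1/TR spectral period.
freq_offsets_hz = phase_increments / (2.0 * np.pi * TR)
```

With TR = 3 ms, adjacent offsets are about 28 Hz apart, so the dark band is sampled densely across the roughly 333 Hz spectral period.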
The artifact identification performed by the VI sub-network can be viewed as a "soft classification" task, in which label values are not binary but vary continuously between 0 and 1. Soft classification strategies have also been used in other computer vision tasks [33]. For our task, adopting \(\mathrm{sigmoid}(\mathrm{SPC}\_\mathrm{label}/\mathrm{original}\_\mathrm{cine}-1)\) as a natural label provides an objective and sensitive target for artifact identification. Consequently, the VI sub-network can even detect the zippers of flow artifacts in the background, which are difficult to discern even for human observers. However, this label implicitly assumes that flow artifacts are always bright and banding artifacts are always dark. While this assumption usually holds, flow artifacts can also cause signal loss [16]. The network would classify such hypoenhanced flow artifacts as banding artifacts, introducing a potential bias into the artifact identification.
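The soft label above can be computed directly from the image pair; the sketch below follows the formula from the text, with a small `eps` term added here (an assumption, not part of the original formula) to guard against division by zero in background regions.

```python
import numpy as np

def soft_artifact_label(spc_label, original_cine, eps=1e-8):
    """Soft label: sigmoid(SPC_label / original_cine - 1).

    ~0.5 where the artifact-suppressed label and original agree,
    > 0.5 where the original is darker than the label (banding),
    < 0.5 where it is brighter (flow hyperenhancement).
    """
    ratio = spc_label / (original_cine + eps)  # eps avoids division by zero
    return 1.0 / (1.0 + np.exp(-(ratio - 1.0)))
```

Because the label is a smooth function of the intensity ratio, it grades artifact severity continuously rather than forcing a hard artifact/no-artifact decision, which is what makes it sensitive to faint zipper artifacts.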
An important observation from the results is that the network can only recover information present in the input image. If artifacts occupy a large area of the image, recovery of that area is an inference, much like what human observers do in their minds, rather than a faithful reconstruction. Discrepancies may therefore arise in those areas relative to phase cycling, which obtains information from multiple frequencies that is invisible to the proposed method. Nevertheless, this problem may be resolved by combining, for example, two-fold phase cycling with a neural network. A standard linear combination of two-fold phase cycling may not yield satisfactory performance [5], but with the help of neural networks the task becomes more tractable, reducing scan time while improving reconstruction quality. In our results, we also observed slight blurring, more evident in the abdomen than in the heart. Potential causes include the limitation of U-Net in preserving fine-grained details [34], a lack of training data, especially for the abdominal area, and the intrinsic blur in the SPC label arising from combining multiple images acquired across different breath-holds. More advanced architectures [34] or loss functions [35], together with more training data, may help reduce the blurring. The slight blurring may also explain the small LVEF discrepancy between the original cine and the network-processed cine in the clinical dataset.
The unpaired, randomized evaluation of network performance under different parameter variations confirmed that the method generalizes reasonably when sequence parameters, or even the scanner, are changed. While performance was slightly reduced for certain parameter variations, such as flip angle for banding artifacts and slice thickness for flow artifacts, the reduction remained within a reasonable range and did not significantly impair image quality. The sensitivity of banding artifact suppression to a reduced flip angle may be explained by the poorer CNR and SNR; changes in image contrast are a known hurdle to the generalization of deep learning models [36]. Reducing slice thickness is known to increase flow artifacts [22]. An interesting finding is that the clinical dataset, after processing by the proposed network, had higher flow artifact scores than the baseline dataset. This may be due to several factors, such as scanner differences, better shimming, and subject characteristics. Because many of the patients were elderly people with cardiac dysfunction, their blood flow may be slower than that of young, healthy subjects, generating fewer flow artifacts.
Limitations
Our study has limitations. First, the qualitative clinical evaluation was performed in an unblinded fashion because of the apparent differences between regular cine and the network output and the need to evaluate the interpretability of the method. Although the readers were required to comply strictly with the criteria, potential bias may exist in the scoring. Second, the evaluation was based on data collected from a single vendor at two centers, a research institution and a clinical center, and the sample size for the clinical evaluation was relatively small. Although the current sample size is sufficient to verify the feasibility of the proposed method, its generalizability to multi-vendor cine data collected from larger cohorts at multiple clinical centers remains to be investigated.