
Open Access 01.12.2024 | Research

Image fusion-based low-dose CBCT enhancement method for visualizing miniscrew insertion in the infrazygomatic crest

Authors: Peipei Sun, Jinghui Yang, Xue Tian, Guohua Yuan

Published in: BMC Medical Imaging | Issue 1/2024

Abstract

Digital dental technology covers oral cone-beam computed tomography (CBCT) image processing and low-dose CBCT dental applications. A low-dose CBCT image enhancement method based on image fusion is proposed to address the need to visualize miniscrew insertion in the infrazygomatic crest. Specifically, a sharpening correction module is first proposed, in which the CBCT image is sharpened to compensate for the loss of detail in underexposed/overexposed regions. Second, a visibility restoration module based on type-II fuzzy sets and a contrast enhancement module using curve transformation are designed. In addition, we propose a perceptual fusion module that fuses the visibility and contrast of oral CBCT images. As a result, the problems of overexposure/underexposure, low visibility, and low contrast in oral CBCT images can be effectively addressed with consistent interpretability. The proposed algorithm was analyzed in comparison experiments with a variety of algorithms, as well as in ablation experiments. Compared with advanced enhancement algorithms, it achieved excellent results in low-dose CBCT enhancement and enables effective observation of miniscrew insertion in the infrazygomatic crest. Compared with the best-performing competing method, its evaluation metrics are 0.07–2 better on both datasets. The project can be found at: https://github.com/sunpeipei2024/low-dose-CBCT.

Introduction

Low-dose CBCT oral image enhancement [1–4] is important in dentistry because it uses diagnostic examinations performed at low X-ray radiation doses, reducing radiation exposure while improving the quality of CBCT oral images and providing a more accurate diagnosis and treatment plan for the patient's oral disease. Low-dose CBCT oral image enhancement has several important implications: (1) Improved image quality [5]: low-dose CBCT obtains structural images of the oral cavity by X-ray scanning; enhancement can improve the contrast and clarity of the images, helping the physician to better visualize oral structures, identify details, and reduce noise. (2) Improved diagnostic accuracy [6]: oral image enhancement can help doctors diagnose problems with teeth and oral structures more accurately. (3) Reduced radiation dose [7]: an important advantage of low-dose CBCT is that it reduces the X-ray dose to which the patient is exposed; with image enhancement techniques, high-quality images can be obtained at low radiation doses, reducing the potential risk of radiation to the patient.
CBCT is an important part of oral radiography and is now widely used in dental clinics because of its high 3D resolution at low cost [8]; applications include temporomandibular joint disorders, orthodontic treatment, and complex root canal treatment [9]. In orthodontic treatment, CBCT can provide accurate images of the hard tissues at the zygomatic alveolar ridge and infrazygomatic crest [10]. In addition, as the only diagnostic imaging technique for this purpose, it can roughly determine the structure and density of the jawbone, evaluate the anatomy of the cortical and cancellous bone, and offer a reference for adding an assistive device when moving the posterior teeth [11]. However, CBCT is genotoxic and cytotoxic to oral mucosal cells in both children and adults and may increase the shedding of human oral mucosal cells [12]. The risk of dental CBCT radiation exposure increases with dose, and this exposure may lead to tissue damage, especially in the head and neck [13]. Dentists should therefore use CBCT scans with caution, minimize the radiation dose received by the patient, and use lower-dose scanning parameters whenever possible. Patients should also be informed of the potential risks, and an adequate risk-benefit assessment should be performed before a CBCT scan [14]. Because of its imaging properties, CBCT images containing stainless-steel crowns, implants, or restorations show streaks and shadows, which reduce image contrast; low-quality CBCT also introduces additional error at the alignment stage, reducing accuracy.
Skeletal Class II high-angle patients have overdeveloped posterior alveolar bone, often require simulated Le Fort I surgery to intrude the maxillary posterior teeth, and sometimes need maximum anchorage to retract the anterior teeth. Miniscrews meet both requirements, demand little patient compliance, and cost less. However, in high-angle patients the cortical bone is thin, the alveolar bone is relatively thin, and the infrazygomatic crest floor is low, so miniscrews are easily loosened and dislodged. A large number of studies at home and abroad have been built on CBCT, but because most miniscrews are made of stainless steel, CBCT image accuracy suffers. In this paper, an in-depth study is conducted to improve the quality of CBCT images and to solve the problem of radiologic accuracy caused by low-quality CBCT images. Figure 1 shows CBCT imaging and the proposed method's enhancement effect on low-dose CBCT images.
Each method has its advantages and disadvantages when dealing with low-dose CBCT oral image enhancement. Many traditional methods consider only visibility or noise, whereas we believe a combination of degradation factors must be addressed to enhance low-dose CBCT images, so such methods may be less useful in practical clinical applications. Many deep learning methods still require validation in the dental field, and the complexity and black-box nature of deep learning models may raise concerns in the clinical setting.
In this study, we take full advantage of mathematical transformations to address the degradation of low-dose CBCT oral image quality, using an integrated approach to highlight the representation of different tasks. Unlike previous approaches that focus only on low visibility, we consider the causes of degradation in low-dose CBCT oral images broadly. First, we propose a sharpness enhancement method based on mathematical transformations to compensate for the loss of detail in poorly exposed regions; the method subtly enhances the sharpness of the image, improving overall quality. Second, we design a visibility restoration module based on type-II fuzzy sets to deal with low-dose CBCT oral images more comprehensively. Meanwhile, we introduce a contrast enhancement module using curve transformation, which improves image contrast and makes key features more prominent. In addition, we propose a perceptual fusion module that further improves oral CBCT images by fusing visibility and contrast information. This approach has rarely been studied in the field of low-dose CBCT oral image enhancement and provides new ideas for solving related problems. It is worth mentioning that since our method is based on a non-physical model, it exhibits appealing characteristics in representing biological visual properties. Our method is also highly interpretable, which lets us clearly understand the contribution of each module to image enhancement and provides strong support for tuning and improving the method. The main contributions of this paper are as follows:
  • To solve the problem of detail loss in the underexposed/overexposed regions of low-dose CBCT oral images, we propose a sharpening enhancement method based on mathematical transforms.
  • To solve the problem of visibility and contrast degradation in low-dose CBCT oral images, we design a visibility restoration module based on type II fuzzy sets, and we design a contrast enhancement module using curve transformation.
  • We propose a perceptual fusion strategy that simultaneously considers two images, fuses the visibility restored and contrast enhanced CBCT images, and considers both pixel intensity and global gradient changes in both images.
Related work

In recent years, research on low-dose CBCT image enhancement has made significant progress in several areas, focusing mainly on two directions: traditional methods and deep learning methods. Studies of traditional methods [15–20] mainly rely on classical image processing techniques and mathematical operations to improve the quality and sharpness of CBCT images. These methods include various filtering techniques, edge enhancement, histogram equalization, etc. By pre-processing, denoising, and enhancing the images, traditional methods try to improve the visualization of CBCT images. However, they may suffer from performance limitations when dealing with complex problems, especially images with complex structures and noise. On the other hand, deep learning methods [21–23] show great potential in CBCT image enhancement. These methods utilize models such as deep neural networks and convolutional neural networks to achieve automatic image restoration and enhancement by learning features and patterns from large amounts of data. Deep learning methods are able to learn more complex and abstract features, and therefore usually achieve better results when dealing with complex problems in CBCT images.
Overall, there are advantages and disadvantages to both traditional and deep learning methods. Traditional methods, which rely on feature extraction designed by engineers and hand-crafted rules, have the advantage of being highly interpretable and computationally fast, but may be limited when dealing with complex problems. Deep learning methods [24–26], on the other hand, are able to learn complex feature representations from large amounts of data and have stronger generalization capabilities, but require large amounts of labeled data and computational resources. Many image enhancement applications have been proposed, showing the potential of stochastic resonance [27–31], underwater image enhancement [32], and dehazing [33] for enhancing contrast. In the following subsections, we introduce the application of these two classes of methods to oral CBCT image enhancement.

Traditional methods

Many traditional techniques are based on mathematical operations. For example, common filters (e.g., median filters, Gaussian filters) can reduce noise in an image by averaging or weighted averaging of local pixel values. Hart et al. [15] determined the optimal parameters for variable-kernel deformable registration of CBCT images to improve the accuracy and convergence of on-line adaptive radiotherapy. Churchill et al. [17] combined image processing techniques and statistical reconstruction, using an initial filtered back-projection reconstruction to create binary edge masks, which were then used for weighted regularized reconstruction. Chen et al. [18] proposed a new physical model-based approach to enhance the contrast of CT images: the input image is converted into a tissue parameter map using the relationship between tissue parameters, and with a classical parameter-fitting model, a partially attenuated image with enhanced contrast can be computed. Soltanian-Zadeh et al. [19] utilized the frequency characteristics of artifacts to identify and correct them. Villain et al. [20] enhanced CT images by applying half-quadratic edge-preserving image restoration (or deconvolution); this method can be used with almost any CT scanner as long as the overall point spread function can be roughly estimated. These methods are usually less adaptable to different image conditions and specific problems and may not accommodate a wide range of dental images. While traditional methods do not require much investment in training data, their performance depends on engineer-designed feature extraction and manually formulated rules, and thus may not adapt well to complex structure and noise in the image. Compared with deep learning methods, traditional methods are usually more interpretable and computationally efficient, but may exhibit limitations when dealing with complex problems.

Deep learning methods

Deep learning methods have achieved excellent performance in image enhancement tasks by capturing complex features and textures in images; they are highly adaptable and can be used for different types of dental images and various problems. Wang et al. [34] performed multiclass segmentation of jaws, teeth, and background from CBCT scans, a fully automated method that segments multiple anatomical structures simultaneously. Kida et al. [35] proposed a deep neural network method to improve CBCT images acquired with shorter scan times and fewer exposures. Madesta et al. [36] used a convolutional neural network architecture with residual dense networks to account for motion variability of targets. Griner et al. [37] developed a deep learning method to empirically correct the most commonly observed artifacts in CBCT-based images, including scatter-induced artifacts that can significantly degrade image quality. However, deep learning models are usually black boxes whose inner workings are difficult to explain, and they typically require large amounts of labeled data for training, especially for tasks that require learning complex features and patterns. For example, to train a deep learning model to improve the quality of low-dose CBCT images, a large amount of paired data with corresponding high-quality images is required; collecting and labeling these data may require significant time and human resources and may be impractical for some applications.

Proposed method

In this section, we illustrate the proposed method, which consists of four basic components. These four components include sharpening correction, visibility restoration, contrast enhancement and image fusion modules. The overall flowchart of the proposed method is shown in Fig. 2.

Sharpening correction module

First, we input the oral CBCT image and sharpen it; the main purpose of this step is to enhance the edges and details of the image so that it looks clearer and sharper. We define the sharpened image \(\hat Z\) by the following equation:
$$\hat Z = (I + \mathcal{N}\{ I - G*I\} )/2,$$
(1)
in Eq. (1), \(G * I\) denotes the Gaussian filtering result of \(I\), and \(\mathcal{N}\{ \cdot \}\) denotes the normalization operator. Sharpening the oral CBCT image makes the edges of objects clearer and more visible and highlights the differences between boundaries and regions, thus improving the visual quality of the image.
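To make this step concrete, the following is a minimal sketch of Eq. (1) in Python (the paper's experiments used MATLAB); it assumes a grayscale slice scaled to [0, 1], and the Gaussian sigma is an illustrative choice, since the filter width is not stated in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize(img):
    """The operator N{.}: rescale to [0, 1], guarding against a flat image."""
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

def sharpen(I, sigma=2.0):
    """Eq. (1): Z_hat = (I + N{I - G*I}) / 2, an unsharp-mask-style correction."""
    high_pass = I - gaussian_filter(I, sigma=sigma)  # I - G*I
    return (I + normalize(high_pass)) / 2.0

# Example on a random stand-in for a CBCT slice
Z_hat = sharpen(np.random.rand(256, 256))
```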

Visibility recovery module

In the visibility restoration module, we work on enhancing the visual clarity of CBCT images. To achieve this, we introduce a method based on type-II fuzzy sets: building on type-II fuzzy set theory, we derive new upper and lower bounds for the Hamacher t-conorm and employ a transform-based gamma correction technique to enhance CBCT image visibility. Through the combined application of these techniques, we aim to improve the quality and clarity of CBCT images so as to support relevant applications and medical diagnosis more accurately. First, the mean \(\mu\) and standard deviation \(\sigma\) of the fuzzy image \(\hat Z(x)\) are calculated:
$$\mu = \frac{1}{n} \cdot \sum\limits_{i = 1}^n {\hat Z({x_i})} ,$$
(2)
$$\sigma = \sqrt {\frac{1}{n - 1} \cdot \sum\limits_{i = 1}^n {{{\left( {\hat Z({x_i}) - \mu } \right)}^2}} } ,$$
(3)
based on Eqs. (2) and (3), we compute new bounds for the Hamacher t-conorm. The new upper bound \(\hat u(x)\) is given by:
$$\hat u(x) = {\left( {\hat Z(x)} \right)^\alpha } + \left( {1 - {{\left( {\hat Z(x)} \right)}^\alpha }} \right) \cdot {\left( {\sigma^2} \right)^\alpha },$$
(4)
where \(\alpha = 0.95\). The new lower limit \(\hat w(x)\) is expressed using the following equation:
$$\hat w(x) = \left( {\frac{k \cdot \mu }{{\sigma + b}}} \right) \cdot \left( {\hat Z(x) - c \cdot \mu } \right) + {\mu^d}.$$
(5)
where \(k\), \(b\), \(c\), and \(d\) are tuning parameters. The Hamacher t-conorm is an operation used in fuzzy logic to merge the membership values of two fuzzy sets. When computing the new Hamacher t-conorm, the updated lower and upper bounds must be taken into account to accurately reflect the relationship between the fuzzy sets; this ensures accurate results when dealing with fuzzy data and increases the reliability of mathematical and statistical applications:
$$t(x) = \frac{{\hat u(x) + \hat w(x) + \left( {{\sigma^2} - 2} \right) \cdot \hat u(x) \cdot \hat w(x)}}{{1 - \left( {1 - {\sigma^2}} \right) \cdot \hat u(x) \cdot \hat w(x)}}.$$
(6)
Gamma correction can be used to improve the visual quality of CBCT images that appear dull or unclear after processing. By remapping the pixel values of the input CBCT image, the sharpness and contrast of the image are enhanced. Gamma correction applies a nonlinear transformation to the pixel values using a gamma function, which adjusts the brightness and contrast of the image so that dark and bright details become more prominent:
$${L_1}(x) = \max \left( {t(x)} \right) \cdot {\left( {\frac{t(x)}{{\max \left( {t(x)} \right)}}} \right)^{1.5 \cdot \alpha }},$$
(7)
where \({L_1}(x)\) is the final output of the visibility restoration module.
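The whole module can be sketched as follows, assuming the sharpened slice \(\hat Z\) lies in [0, 1]. The text fixes \(\alpha = 0.95\) but does not report values for \(k\), \(b\), \(c\), and \(d\); the ones below are placeholders, and the clipping before the gamma step is our numerical safeguard, not part of the formulation.

```python
import numpy as np

def restore_visibility(Z_hat, alpha=0.95, k=1.0, b=0.5, c=0.5, d=1.0):
    mu = Z_hat.mean()                                            # Eq. (2)
    sigma = Z_hat.std(ddof=1)                                    # Eq. (3), sample std
    u = Z_hat**alpha + (1 - Z_hat**alpha) * (sigma**2)**alpha    # Eq. (4), upper bound
    w = (k * mu / (sigma + b)) * (Z_hat - c * mu) + mu**d        # Eq. (5), lower bound
    # Eq. (6): Hamacher t-conorm with the new bounds
    t = (u + w + (sigma**2 - 2) * u * w) / (1 - (1 - sigma**2) * u * w)
    t = np.clip(t, 1e-6, None)  # keep the gamma step well defined
    return t.max() * (t / t.max()) ** (1.5 * alpha)              # Eq. (7), L1(x)
```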

Contrast enhancement module

The main goal of the contrast enhancement module is to improve the contrast of CBCT images. To achieve this, we first process the image using two distinct curve transformation functions and combine their outputs to produce an image with significantly enhanced contrast. Next, we introduce a gamma-corrected stretching function that stretches the image intensities to a standard interval. The key to this process is to effectively enhance the grayscale differences in the image through the combined application of different transforms and adjustments, making the details more prominent and legible and providing a more reliable basis for subsequent medical image analysis and diagnosis. Combining Eqs. (8) and (9), we apply the probability density function of the standard normal distribution and the softplus function to each pixel value of the CBCT image, processing the image individually to improve its visual quality and its ability to express information:
$$g(x) = \frac{1}{{\sqrt {2\pi } }}\exp \left( { - \frac{{{{\left( {\hat Z(x)} \right)}^2}}}{2}} \right),$$
(8)
$$s(x) = \log \left( {1 + \exp \left( {\hat Z(x)} \right)} \right).$$
(9)
Then, a logarithmic image processing operation, Eq. (10), combines the outputs of these two functions:
$$l(x) = \sqrt {g(x) + s(x) + g(x) \cdot s(x)} .$$
(10)
Finally, a gamma-controlled normalization function is applied via Eq. (11) in order to fully stretch the image intensities to standard intervals:
$${L_2}(x) = {\left( {\frac{{l(x) - \min \left( {l(x)} \right)}}{{\max \left( {l(x)} \right) - \min \left( {l(x)} \right)}}} \right)^\eta }.$$
(11)
The results obtained through the contrast enhancement module have improved contrast while maintaining brightness and naturalness. Here, \(g(x)\) is the generated contrast-modified image, \(\hat Z(x)\) is the input contrast-distorted image, and \({L_2}(x)\) is the contrast-stretched image after normalization, where \(\eta = 0.8\) is the gamma correction parameter responsible for adjusting the contrast of the image.
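A compact sketch of Eqs. (8)–(11), again assuming an input slice in [0, 1] and using the stated \(\eta = 0.8\):

```python
import numpy as np

def enhance_contrast(Z_hat, eta=0.8):
    g = np.exp(-Z_hat**2 / 2) / np.sqrt(2 * np.pi)  # Eq. (8): standard normal pdf
    s = np.log1p(np.exp(Z_hat))                     # Eq. (9): softplus
    l = np.sqrt(g + s + g * s)                      # Eq. (10): logarithmic combination
    # Eq. (11): gamma-controlled normalization to [0, 1]
    return ((l - l.min()) / (l.max() - l.min())) ** eta
```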

Perceptual fusion module

At this point, we have obtained a visibility-restored CBCT image and a contrast-enhanced CBCT image. Unlike traditional image fusion settings, our fusion inputs stem from two independent tasks. We therefore propose a novel fusion strategy that simultaneously considers the weight assignment of the two images, consisting of a weight based on pixel intensity and a weight based on the global gradient. By integrating pixel-level intensity information with the gradient characteristics of the overall image, our method captures and exploits the beneficial information produced by the two tasks more comprehensively during fusion, further improving the quality and information content of the synthesized image. This weighting strategy gives our fusion method greater flexibility and adaptability, enabling it to perform well across different scenarios and tasks.

Weight design based on pixel intensity

The pixel-intensity-based fused image \(F(x)\), a weighted sum of the two images, can be expressed as:
$$F(x) = {W_1}(x){L_1}(x) + {W_2}(x){L_2}(x),$$
(12)
where \({W_1}(x)\) and \({W_2}(x)\) denote the importance weights of the pixels of \({L_1}(x)\) and \({L_2}(x)\). \(W_n(x)\) gives more weight to regions where the pixel intensities are well exposed. Let \({m_n}\) denote the average pixel intensity of \({L_n}(x)\); the weight should be larger when \({L_n}(x)\) is close to \(1 - {m_n}\), which can be expressed as \(\exp \left( { - {{\left( {{L_n}(x) - \left( {1 - {m_n}} \right)} \right)}^2}} \right)\). When processing an image, the exposure level of the input image must be taken into account, because a large difference in brightness between images results in more well-exposed pixels. To account for this, a larger value of \({\sigma_n}\) is assigned when there is a significant difference in average brightness between images. The weights based on pixel intensity are denoted as:
$${w_{1,n}}(x) = \exp \left( { - \frac{{{{\left( {{L_n}(x) - \left( {1 - {m_n}} \right)} \right)}^2}}}{2\sigma_n^2}} \right).$$
(13)
with
$${\sigma_n} = \left\{ {\begin{array}{*{20}{l}} {1.5\left( {{m_{n + 1}} - {m_n}} \right)}&{n = 1} \\ {0.75\left( {{m_{n + 1}} - {m_{n - 1}}} \right)}&{1 < n < N} \\ {1.5\left( {{m_n} - {m_{n - 1}}} \right)}&{n = N} \end{array}} \right.,$$
(14)
where \(N\) is the number of images in the set. In Eq. (13), dark pixels are assigned a larger weight when \({m_n}\) is close to 1, and vice versa; in addition, when the average brightness differs significantly between images, larger values are assigned.
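A sketch of Eqs. (13) and (14) for the N = 2 case used here (the visibility-restored image \(L_1\) and the contrast-enhanced image \(L_2\)); the absolute value and epsilon guarding \(\sigma_n\) are our numerical safeguards, not part of the formulation:

```python
import numpy as np

def intensity_weights(images, eps=1e-6):
    m = [img.mean() for img in images]       # mean intensity m_n of each image
    N = len(images)
    weights = []
    for n, img in enumerate(images):
        if n == 0:                           # Eq. (14), n = 1
            sigma = 1.5 * (m[1] - m[0])
        elif n == N - 1:                     # Eq. (14), n = N
            sigma = 1.5 * (m[n] - m[n - 1])
        else:                                # Eq. (14), 1 < n < N
            sigma = 0.75 * (m[n + 1] - m[n - 1])
        sigma = abs(sigma) + eps
        # Eq. (13): favor pixels near the well-exposed level 1 - m_n
        weights.append(np.exp(-(img - (1 - m[n])) ** 2 / (2 * sigma**2)))
    return weights
```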

Weight design based on global gradient

We observe that in regions lacking texture, images often have low contrast or small gradient values, so emphasizing only large-gradient areas may fail to highlight pixels within areas of smaller gradients. Based on this understanding, we introduce a global gradient weighting method aimed at emphasizing global contrast. In images with higher contrast, the cumulative histogram has smaller gradient values; pixels whose intensities fall within ranges of the cumulative histogram with relatively small gradients should therefore receive greater weight. In other words, the weight of each pixel is adjusted dynamically according to its intensity and the gradient distribution of the image: in areas with smaller gradients, which tend to lack texture, pixels should contribute more. This global gradient-based weighting strategy makes our image fusion method more adaptive under different image characteristics and contrast conditions, and effectively improves overall image quality and information transfer. The weights based on the global gradient can be expressed as:
$$w_{2,n}(x)=\frac{\operatorname{Grad}_n\left(L_n(x)\right)^{-1}}{\sum\limits_{n=1}^N\operatorname{Grad}_n\left(L_n(x)\right)^{-1}+\epsilon},$$
(15)
where \(\epsilon\) is a very small positive value and \({\operatorname{Grad}_n}\left( {{L_n}(x)} \right)\) denotes the gradient of the cumulative histogram at intensity \({L_n}(x)\). In image processing, the global gradient allows a more comprehensive analysis of a CBCT image that takes the overall characteristics of the image into account. While traditional local gradient methods focus on local variations around specific pixels, global gradient methods capture a wider range of image context by considering the gradients of distant pixels, contributing to a better understanding of the structure and features of the entire image. To calculate the final weight for each CBCT image, the two weights are combined and normalized:
$$W_n(x)=\frac{w_{1,n}(x)\times w_{2,n}(x)}{\sum\limits_{n=1}^Nw_{1,n}(x)\times w_{2,n}(x)+\epsilon}.$$
(16)
Using the weights obtained by Eq. (16), we can fuse the images according to Eq. (12).
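The sketch below implements Eqs. (15), (16), and (12) under stated assumptions: intensities in [0, 1], a 256-bin histogram as the discretization of the cumulative histogram (the bin count is not specified in the text), and the intensity_weights function from the previous sketch.

```python
import numpy as np

def global_gradient_weights(images, bins=256, eps=1e-6):
    """Eq. (15): weight pixels by the inverse gradient of the cumulative
    histogram evaluated at each pixel's intensity."""
    inv_grads = []
    for img in images:
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        cdf = np.cumsum(hist) / img.size
        grad = np.gradient(cdf)                               # Grad_n of the CDF
        idx = np.clip((img * (bins - 1)).astype(int), 0, bins - 1)
        inv_grads.append(1.0 / (grad[idx] + eps))
    total = sum(inv_grads) + eps
    return [g / total for g in inv_grads]

def fuse(L1, L2, eps=1e-6):
    """Eq. (16) combines both weight types; Eq. (12) blends the images."""
    w1 = intensity_weights([L1, L2])          # pixel-intensity weights, Eq. (13)
    w2 = global_gradient_weights([L1, L2])
    W = [a * b for a, b in zip(w1, w2)]
    norm = W[0] + W[1] + eps
    return (W[0] * L1 + W[1] * L2) / norm
```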

Results

In this section, we evaluate the proposed method in depth to obtain a full picture of its performance. To ensure its validity and reliability in practical applications, we employ a variety of evaluation approaches, including quantitative metrics and qualitative analysis, assessing performance from different perspectives. The qualitative analysis visually examines the processed images in terms of image quality, detail retention, and other aspects, providing intuition about how the method performs on real scenes. To verify the effectiveness of each module, we also conducted a series of ablation experiments: by testing each module independently, we gain insight into its impact on the final results and determine its performance contribution. This step-by-step validation helps to ensure that each component of the method works effectively and ultimately yields high-quality results. We used MATLAB 2022a to perform all image processing operations and experiments. The data for this article can be found at: https://osf.io/f9r8v/.

Experiment settings

In this section, we describe the experimental setup. We applied four no-reference evaluation metrics to two image datasets without reference images; these metrics capture image quality, contrast, and detail retention, providing comprehensive information about the results, while the two no-reference datasets allow the performance and generalization ability of the methods to be verified more thoroughly. We compare nine of the most representative and state-of-the-art image enhancement methods, chosen for their wide application and sophistication in the literature, to ensure that our comparison is representative. By using multiple evaluation metrics on different no-reference image datasets, we gain a comprehensive understanding of the performance of each image enhancement method, providing a solid foundation for further analysis and conclusions. This experimental design and detailed validation process help to ensure an objective evaluation of the performance of the image enhancement methods.

Compared methods

Nine image enhancement methods including CEDN [38], NLM [39], NNC [40], MID [41], BCD [42], GM [43], DCFD [44], DPRN [45], and SDCN [46] were compared on Test 1 and Test 2 low-dose CBCT oral miniscrew image datasets.

No-reference image quality assessment metrics

No-reference image quality assessment means that no reference information is available for predicting image quality: for oral CBCT, we can only obtain the current image and evaluate its enhanced version. In this paper, we use the following no-reference image evaluation metrics: Brisque [47], the natural image quality evaluator (NIQE) [48], FADE [49], and average gradient (AG) [50]. The lower the Brisque [47] score, the higher the fidelity of the filtered image and the less detail information is lost. The lower the NIQE [48] score, the more natural the image appears. The lower the FADE [49] score, the lower the perceived fog density. The higher the AG [50] score, the more detail the image contains and the higher its clarity.
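Of these metrics, AG is simple enough to state directly. A common formulation, which we assume here (the exact normalization in [50] may differ), is the mean magnitude of the local intensity gradient:

```python
import numpy as np

def average_gradient(img):
    """AG: mean of sqrt((Gx^2 + Gy^2)/2); grows with edge strength and detail."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2)))
```

Higher AG indicates more detail and clarity, matching the AG↑ convention used in Tables 1, 2, and 3.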

Qualitative and quantitative comparisons on the Test 1 dataset

Qualitative comparisons

First, we compared the different methods on the Test 1 dataset. As shown in Fig. 3, the CBCT image processed by CEDN [38] introduces new noise and exhibits an obvious white tone bias. The NLM-corrected image [39] shows more blocky areas, does not sufficiently highlight important details, and produces obvious distortion. MID [41] improves the contrast of the panoramic image, but the correction is not pronounced enough and may introduce halo-like artifacts. NNC [40] corrects the contrast to a certain extent, but overall image visibility remains low. BCD [42] introduces a haze-like appearance, which degrades the image to a certain extent. GM [43] leads to color deviation and local noise. DCFD [44] fails to highlight local details in the overall image. DPRN [45] introduces unwanted white and gray noise during correction. The SDCN-enhanced image [46] is whitish overall, and the enhanced information is not clear enough. In contrast, our method enhances the details of the CBCT image to the maximum extent, highlights the local contrast of the image, and successfully avoids white balance distortion.

Quantitative comparisons

In order to comprehensively assess differences in performance on CBCT oral miniscrew images in terms of contrast, white balance, and visibility correction, we employed a variety of quantitative scoring metrics for in-depth analysis. Specifically, we used the Brisque [47], NIQE [48], FADE [49], and AG [50] scores to quantitatively assess the performance of the different methods on the Test 1 dataset; the results are shown in Table 1. On the Test 1 dataset, our method consistently performs well on all evaluation metrics, significantly outperforming the comparison methods, which demonstrates its superior contrast, white balance, and visibility correction. Taken together, these results emphasize both the superiority of our method and the critical role of robust image processing in understanding the oral miniscrew environment in CBCT images.
Table 1
No-reference evaluation metric results for the nine compared algorithms and our method on Test 1

Methods | Brisque↓ | NIQE↓  | FADE↓  | AG↑
CEDN    | 70.6131  | 8.2567 | 1.1123 | 2.8567
NLM     | 72.3456  | 7.8976 | 1.0376 | 3.1590
NNC     | 69.1234  | 7.6234 | 1.1456 | 3.0234
MID     | 70.4321  | 8.7654 | 1.1832 | 2.9812
BCD     | 68.9876  | 8.4321 | 1.0598 | 3.1123
GM      | 78.1234  | 7.5432 | 1.1987 | 3.0456
DCFD    | 71.2345  | 8.3210 | 1.0054 | 2.8976
DPRN    | 79.8765  | 8.1234 | 1.0234 | 3.0654
SDCN    | 70.3615  | 7.9876 | 1.1256 | 3.1987
Our     | 67.6204  | 7.4832 | 0.9624 | 3.3588

Qualitative and quantitative comparisons on the Test 2 dataset

Qualitative comparisons

First, we compared the different methods on the Test 2 dataset. As shown in Fig. 4, the image processed by CEDN [38] shows overall noise, and the metal artifacts in particular are more obvious, which significantly reduces the accuracy of detecting the arch and arch thickness. The NLM [39] method introduces new noise, making the overall distribution uneven. Although the NNC [40] method enhances the details of the image to a certain extent, it also introduces some noise. MID [41] retains the details of the image but introduces some foggy information. The BCD [42] method shows no significant enhancement effect and cannot overcome the metal artifacts through post-processing. The GM [43] method introduces white tones and weakens the expression of detail information. The DCFD [44] method enhances the image contrast relatively stably but still fails to highlight the details of the image. The DPRN [45] method enhances the image contrast but suffers from white balance distortion; the halo artifact becomes especially obvious where regions of the image are brighter than their surroundings, manifesting as lower-brightness edges around the highlighted regions. In contrast, the SDCN [46] method performs well in presenting detailed information, but the visibility of the image is low. By comparison, our method maximizes the details of the CBCT image while successfully avoiding the various white balance distortions.

Quantitative comparisons

In order to comprehensively assess the performance of the different methods on CBCT oral miniscrew images in terms of contrast, white balance, and visibility correction, we again used the four evaluation metrics Brisque [47], NIQE [48], FADE [49], and AG [50] for quantitative analysis. The results on the Test 2 dataset, listed in Table 2, allow a clear comparison of the algorithms: our method outperforms the comparison algorithms on all four metrics, which suggests that it is very effective at visibility and white balance correction. In conclusion, the quantitative comparison with several algorithms demonstrates the advantages of the proposed algorithm and underlines the importance of well-enhanced CBCT images of oral miniscrews.
Table 2
No-reference evaluation metric results for the nine compared algorithms and our method on Test 2

Methods | Brisque↓ | NIQE↓  | FADE↓  | AG↑
CEDN    | 51.2345  | 7.1234 | 1.7856 | 1.8790
NLM     | 54.5678  | 7.7890 | 1.5234 | 1.5432
NNC     | 52.1234  | 8.2345 | 1.9378 | 1.9876
MID     | 50.9876  | 6.9876 | 2.0456 | 1.3210
BCD     | 53.4321  | 7.4567 | 1.6543 | 2.0345
GM      | 51.8765  | 8.0987 | 1.8567 | 1.6543
DCFD    | 54.3210  | 6.8765 | 1.7890 | 1.7890
DPRN    | 50.3456  | 8.3456 | 2.0765 | 1.4321
SDCN    | 52.7890  | 7.5432 | 1.4321 | 1.9987
Our     | 49.6822  | 6.6040 | 1.3184 | 2.2253

Ablation experiment

In this section, two image datasets are selected to ensure the generalizability of the experimental results. The proposed processing scheme is applied to the selected datasets, and the processed images are evaluated with the selected metrics against the image processing algorithms used as benchmarks; a sufficient sample size and appropriate statistical analysis are used to draw reliable conclusions. We performed a quantitative evaluation through ablation experiments, whose results are listed in Table 3. Through these experiments, we gained a deeper understanding of each component's contribution to the algorithm's performance: (1) our method without the visibility restoration module (-w/o VRM); (2) our method without the contrast enhancement module (-w/o CEM). The results of the oral CBCT low-dose enhancement ablation experiments demonstrate significant effectiveness in improving the quality of oral CBCT images and the visualization of anatomical structures.
Table 3
Results of ablation experiments on the two datasets

Ablated models | Test 1: Brisque↓ | NIQE↓ | FADE↓ | AG↑   | Test 2: Brisque↓ | NIQE↓ | FADE↓ | AG↑
-w/o VRM       | 68.170           | 7.501 | 1.131 | 3.190 | 50.181           | 6.938 | 1.591 | 2.140
-w/o CEM       | 69.601           | 7.638 | 1.058 | 2.938 | 51.308           | 6.829 | 1.641 | 1.904
Full model     | 67.620           | 7.483 | 0.962 | 3.218 | 49.682           | 6.604 | 1.432 | 2.225

Discussion

The proposed method takes into account the characteristics of oral low-dose CBCT images, overcomes the artifacts, detail loss, and color distortion caused by excessive enhancement, and maximizes enhancement while retaining detail and structural information. Judging from the evaluation results on the two datasets, the proposed method performs excellently, but it still has limitations and challenges. To further improve the quality of low-dose CBCT images, more complex image reconstruction algorithms, such as iterative reconstruction techniques, are necessary; however, these algorithms are computationally intensive and place high demands on hardware, which may increase cost and processing time. In addition, the trade-off between image noise and resolution is very important: while reducing the radiation dose, the noise level of the image tends to increase, degrading image quality, and improving noise control often sacrifices some spatial resolution, an important consideration in medical applications that require high-precision diagnosis. In the future, we will work on these noise-resolution trade-off issues, and solving them with deep learning is the focus of our research.

Conclusion

In this paper, we propose an enhancement method for low-dose CBCT oral images based on image fusion. Specifically, we consider over-/underexposure, visibility, and contrast issues in low-dose CBCT images. To compensate for the loss of detail in under-/over-exposed regions, a sharpening correction module is proposed. A visibility restoration module based on type-II fuzzy sets and a contrast enhancement module using curve transformation are designed. In addition, we propose a perceptual fusion module that fuses the visibility and contrast of oral CBCT images.
In this experiment, we conducted a detailed study of oral CBCT low-dose enhancement. The comparative experimental results show that, relative to existing enhancement methods, our method effectively reduces noise and artifacts and improves the clarity and contrast of the image, allowing the insertion of miniscrews in the infrazygomatic crest to be observed more accurately. The details and contours of the image are clearer, which helps doctors make accurate diagnoses and treatment plans. Our method maintains consistent interpretability, allowing physicians to understand the image enhancement process and trust the results. Building on the present study, we will continue to explore further improvements to oral CBCT low-dose enhancement, in particular optimizing deep learning models to improve the enhancement of low-dose CBCT images.

Acknowledgements

Not applicable.

Declarations

Not applicable.
All authors agreed to publish this article.

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Vijayan S, Luo MJ, Wu S, et al. Image enhancement of ultra-low dose CBCT images using a deep generative model. Oral Surg Oral Med Oral Pathol Oral Radiol. 2022;134(3):e72.
2. Matenine D, Schmittbuhl M, Bedwani S, et al. Iterative reconstruction for image enhancement and dose reduction in diagnostic cone beam CT imaging. J Xray Sci Technol. 2019;27(5):805–19.
3. Ihlis RL, Kadesjö N, Tsilingaridis G, Benchimol D, Shi XQ. Image quality assessment of low-dose protocols in cone beam computed tomography of the anterior maxilla. Oral Surg Oral Med Oral Pathol Oral Radiol. 2022;133(4):483–91.
4. Tsiklakis K, Donta C, Gavala S, et al. Dose reduction in maxillofacial imaging using low dose Cone Beam CT. Eur J Radiol. 2005;56(3):413–7.
5. Hyun CM, Bayaraa T, Yun HS, Jang TJ, Park HS, Seo JK. Deep learning method for reducing metal artifacts in dental cone-beam CT using supplementary information from intra-oral scan. Phys Med Biol. 2022;67(17).
6. van Bunningen RH, Dijkstra PU, Dieters A, van der Meer WJ, Kuijpers-Jagtman AM, Ren Y. Precision of orthodontic cephalometric measurements on ultra low dose-low dose CBCT reconstructed cephalograms. Clin Oral Investig. 2022;26(2):1543–50. https://doi.org/10.1007/s00784-021-04127-9.
7. Hao J, Zhang L, Li L, et al. A comparison of projection domain noise reduction methods in low-dose dental CBCT. In: 2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC). Anaheim: IEEE; 2012. p. 3624–7.
15. Hart V, Burrow D, Li XA. A graphical approach to optimizing variable-kernel smoothing parameters for improved deformable registration of CT and cone beam CT images. Phys Med Biol. 2017;62(15):6246.
16. Reaungamornrat S, Wang AS, Uneri A, et al. Deformable image registration with local rigidity constraints for cone-beam CT-guided spine surgery. Phys Med Biol. 2014;59(14):3761.
17. Churchill V, Gelb A. Edge-masked CT image reconstruction from limited data. In: 15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine. SPIE. 2019;11072:320–4.
18. Chen YW, Shih CT, Lin HH, et al. Physical model-based contrast enhancement of computed tomography images. In: 2016 IEEE 16th International Conference on Bioinformatics and Bioengineering (BIBE). Taichung: IEEE; 2016. p. 238–41.
19. Soltanian-Zadeh H, Windham JP, Soltanianzadeh J. CT artifact correction: an image-processing approach. In: Medical Imaging 1996: Image Processing, vol. 2710. Newport Beach: SPIE; 1996. p. 477–85.
20. Villain N, Goussard Y, Idier J, et al. Three-dimensional edge-preserving image enhancement for computed tomography. IEEE Trans Med Imaging. 2003;22(10):1275–87.
21. Lei Y, Wang T, Harms J, et al. Image quality improvement in cone-beam CT using deep learning. In: Medical Imaging 2019: Physics of Medical Imaging, vol. 10948. San Diego: SPIE; 2019. p. 556–61.
22. Jiang Z, Chen Y, Zhang Y, et al. Augmentation of CBCT reconstructed from under-sampled projections using deep learning. IEEE Trans Med Imaging. 2019;38(11):2705–15.
23. Zhang Y, Yue N, Su MY, et al. Improving CBCT quality to CT level using deep learning with generative adversarial network. Med Phys. 2021;48(6):2816–26.
24. Ren Z, Kong X, Zhang Y, et al. UKSSL: underlying knowledge based semi-supervised learning for medical image classification. IEEE Open J Eng Med Biol. 2023;1–8.
25. Zhang Y, Deng L, Zhu H, et al. Deep learning in food category recognition. Inform Fusion. 2023;98:101859.
26. Ren Z, Wang S, Zhang Y. Weakly supervised machine learning. CAAI Trans Intell Technol. 2023;8(3):549–80.
27. Mohanty S, Dakua SP. Toward computing cross-modality symmetric non-rigid medical image registration. IEEE Access. 2022;10:24528–39.
28. Regaya Y, Amira A, Dakua SP. Development of a cerebral aneurysm segmentation method to prevent sentinel hemorrhage. Netw Model Anal Health Inform Bioinform. 2023;12(1):18.
29. Dakua SP, Abinahed J, Al-Ansari A. Pathological liver segmentation using stochastic resonance and cellular automata. J Vis Commun Image Represent. 2016;34:89–102.
30. Dakua SP. LV segmentation using stochastic resonance and evolutionary cellular automata. Int J Pattern Recognit Artif Intell. 2015;29(03):1557002.
31. Dakua SP, Abinahed J, Al-Ansari A. Cellular automata-based left ventricle reconstruction from magnetic resonance images. Comput Methods Biomech Biomed Eng Imaging Vis. 2017;5(1):54–67.
32. An S, Xu L, et al. HFM: a hybrid fusion method for underwater image enhancement. Eng Appl Artif Intell. 2024;127:107219.
33. An S, Huang X, Wang L, et al. Semi-supervised image dehazing network. Vis Comput. 2022;38(6):2041–55.
34. Wang H, Minnema J, Batenburg KJ, et al. Multiclass CBCT image segmentation for orthodontics with deep learning. J Dent Res. 2021;100(9):943–9.
35. Kida S, Kaji S, Nawa K, et al. Visual enhancement of cone-beam CT by use of CycleGAN. Med Phys. 2020;47(3):998–1010.
36. Madesta F, Sentker T, Gauer T, et al. Self-contained deep learning-based boosting of 4D cone-beam CT reconstruction. Med Phys. 2020;47(11):5619–31.
37. Griner D, Garrett JW, Li Y, et al. Correction for cone beam CT image artifacts via a deep learning method. In: Medical Imaging 2020: Physics of Medical Imaging, vol. 11312. Houston: SPIE; 2020. p. 1104–10.
38. Shan H, Zhang Y, Yang Q, et al. 3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network. IEEE Trans Med Imaging. 2018;37(6):1522–34.
39. Green M, Marom EM, Kiryati N, et al. Efficient low-dose CT denoising by locally-consistent non-local means (LC-NLM). In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Proceedings, Part III. Berlin: Springer International Publishing; 2016. p. 423–31.
40. Suzuki K, Liu J, Zarshenas A, et al. Neural network convolution (NNC) for converting ultra-low-dose to "virtual" high-dose CT images. In: Machine Learning in Medical Imaging: 8th International Workshop, MLMI 2017, Held in Conjunction with MICCAI 2017, Proceedings. Berlin: Springer International Publishing; 2017. p. 334–43.
41. Wu D, Gong K, Kim K, et al. Consensus neural network for medical imaging denoising with only noisy training samples. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing; 2019. p. 741–9.
42. Chun IY, Zheng X, Long Y, et al. BCD-Net for low-dose CT reconstruction: acceleration, convergence, and generalization. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Proceedings, Part VI. Berlin: Springer International Publishing; 2019. p. 31–40.
43. Zhang R, Ye DH, Pal D, et al. A Gaussian mixture MRF for model-based iterative reconstruction with applications to low-dose X-ray CT. IEEE Trans Comput Imaging. 2016;2(3):359–74.
44. Kang E, Chang W, Yoo J, et al. Deep convolutional framelet denosing for low-dose CT via wavelet residual network. IEEE Trans Med Imaging. 2018;37(6):1358–69.
45. Yin X, Zhao Q, Liu J, et al. Domain progressive 3D residual convolution network to improve low-dose CT imaging. IEEE Trans Med Imaging. 2019;38(12):2903–13.
46. Liu P, Fang R. SDCNet: smoothed dense-convolution network for restoring low-dose cerebral CT perfusion. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Washington: IEEE; 2018. p. 349–52.
47. Mittal A, Moorthy AK, Bovik AC. No-reference image quality assessment in the spatial domain. IEEE Trans Image Process. 2012;21(12):4695–708.
48. Mittal A, Soundararajan R, Bovik AC. Making a "completely blind" image quality analyzer. IEEE Signal Process Lett. 2012;20(3):209–12.
49. Choi LK, You J, Bovik AC. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans Image Process. 2015;24(11):3888–901.
50. Liu R, Ma L, Zhang J, Fan X, Luo Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR). 2021. p. 10556–65.
Metadata
Title: Image fusion-based low-dose CBCT enhancement method for visualizing miniscrew insertion in the infrazygomatic crest
Authors: Peipei Sun, Jinghui Yang, Xue Tian, Guohua Yuan
Publication date: 01.12.2024
Publisher: BioMed Central
Published in: BMC Medical Imaging / Issue 1/2024
Electronic ISSN: 1471-2342
DOI: https://doi.org/10.1186/s12880-024-01289-2
